Installing Citrix Workspace on Ubuntu Linux

Installation

$ cd Downloads
$ sudo dpkg -i Citrix-Workspace.deb

Installing a CA Certificate

The pre-installed certificates might work for you. If you get errors when trying to use Citrix, then you’ll likely need to install a CA certificate.

  • Go to the VPN website you use Citrix for.
  • Click on the lock in front of the URL.
  • Click on Certificate (Chrome) or Connection Secure (Firefox).
  • Note the authority under Issued By (Chrome) or Verified By (Firefox).
  • For example, the certification authority might be: DigiCert TLS RSA SHA256 2020 CA1
  • Search for DigiCert TLS RSA SHA256 2020 CA1 in your favorite search engine.
  • Select the official site that allows you to download the relevant certificates.
  • Download both the PEM and the CRT files.
  • Do the following:
$ sudo cp ~/Downloads/DigiCertTLSRSASHA2562020CA1-1.pem /opt/Citrix/ICAClient/keystore/cacerts
$ sudo chmod 644 /opt/Citrix/ICAClient/keystore/cacerts/DigiCertTLSRSASHA2562020CA1-1.pem
$ sudo /opt/Citrix/ICAClient/util/ctx_rehash

Note: The instructions on the Citrix website seem to be incorrect. They tell you to cp the PEM file with a .crt extension, even though every other file in the directory is a PEM file. The commands above copy the certificate to the default Citrix directory on Ubuntu, change the file permissions to -rw-r--r--, and rehash the certificates so Citrix can use the new one.

X.509 Certificate for Chrome or Firefox Browsers

I’m not sure if this is strictly necessary, but it might also be helpful to import the X.509 certificate into Chrome or Firefox. For Chrome (Firefox is similar), do the following:

  • Click on the three-dot menu
  • Select Settings
  • Search for: certificate
  • Click on Security
  • Click on Manage Certificates
  • Click on Authorities
  • Click on Import
  • Select ~/Downloads/DigiCertTLSRSASHA2562020CA1-1.crt
  • Select all three options.

Forecasting a Cryptocurrency’s Price

April 20, 2022 Update: I’m noticing a bit of traffic for this post. Two corrections need to be made to the text below.

  1. The emission contract for Ergo is set to be changed so that it extends emission out a few decades. This cuts the total Ergo in circulation over this time frame roughly in half. Let’s call it 60 million.
  2. There’s a mathematical error. I didn’t carry the decimal far enough. The market share is 0.00032210.

So, if you assume a crypto market capitalization of $7 trillion, coins in circulation of 60 million, and a 0.002 market share (an increase of about 7 times), this gives us ($7 trillion * 0.002) / 60,000,000 = $233.33.

When I have a moment, I’ll update the rest of the numbers for this post.

Disclosure: I own Ergo. This is a condensed summary of why I purchased it. I’m happy to share what I learned, but this is not investment advice. I don’t know you. I don’t know your situation. Cryptocurrencies are a speculative investment, and you could lose all your money. If that’s not something you can live with, then do something relatively safe, like invest in an index fund, a certificate of deposit at a major bank or U.S. Treasuries. Also, if you are making investment choices based solely on the suggestions of some random blog on WordPress, written by The Deity knows who, without engaging your own mind and taking responsibility for your own choices, then you deserve to lose all your money. Caveat emptor!

“a more rational way to look at things is the total crypto cap multiplied by the percentage of market dominance divided by number of coins.

so right now [Ergo is] at .02% when the whole crypto space achieves 8 trillion (4x) and if ergo were to gain 1% dominance it would be $226.53

$8T x .01 / 35,316,150 = 226.53

you change your numbers depending on your beliefs and timeline. so say 2025 you think it’ll be 4 trillion total crypto cap but you think ergo will get to 5 percent market cap then you get $8T x .05 / 35,316,150 = 566

edit: this is also why cardano sucks ass from a strictly financial viewpoint. put it through these numbers you’ll see”

King_Ghidra_’s comment from the discussion “If ERGO has the market cap of … (Hopium version)”, Reddit.com.

Looking for Patterns in the Chart

I read the above, and I thought it was an interesting forecasting question. It seems to be a fairly common one. Let’s try to ballpark the numbers, as King Ghidra has done. However, I’d like to update these numbers using some data-centric assumptions rather than my beliefs.

Coinmarketcap has a chart of Total Cryptocurrency Market Cap. Unfortunately, they don’t provide the underlying data. So, we are going to have to develop a proxy.

If we look at the entire length of the Coinmarketcap chart, it looks like there’s a pattern: there was a peak in January 2018 and another peak in May 2021, roughly 40 months apart. Before each peak, there was a 3-6 month ramp-up in which the market cap rose by a multiple of 7. Is that a pattern? How do we measure the length of these series?

One way would be to measure the lows between the highs. The market cap gets to about $100 billion in January 2019, climbs by a multiple of three to above $300 billion by July 2019, and drops to $150 billion by March 2020. Then, it increases from that low to above $300 billion by August 2020. From August 2020 to the peak in May 2021, there’s a roughly 7-fold increase, from $350 billion to $2.5 trillion.

Let’s look at the run-up to the January 2018 peak. You’ll immediately notice that the scaling of the data makes it hard to know where to start. So, let’s look from the beginning of the series until it reaches $20 billion, in January 2017. At that scale, there was a low of $929 million in July 2013 and a peak of $15.6 billion in December 2013.

It dropped from the December 2013 peak of $15 billion to $3.5 billion in January 2015. Let’s count the doublings:

  • $3.5 billion (January 2015)
  • $7 billion (November 2015, 11 months)
  • $15 billion (June 2016, 7 months)
  • $30 billion (April 2017, 10 months)
  • $60 billion (May 2017, 1 month)
  • $120 billion (August 2017, 3 months)
  • $240 billion (November 2017, 3 months)
  • $480 billion (December 2017, 1 month)
  • $815 billion (January 2018, 1 month)

Again, we see the same pattern: it takes a full year to drop from the January 2018 peak of $815 billion to $100 billion in January 2019. It triples and then drops back down to $150 billion by April 2020. Then, let’s track the doublings again:

  • $150 billion (April 2020)
  • $300 billion (July 2020, 3 months)
  • $600 billion (December 2020, 5 months)
  • $1,200 billion (February 2021, 2 months)
  • $2,400 billion (May 2021, 3 months)

So, the market has gotten large enough that doublings are going to be less frequent. But have we hit a peak yet? Consider this: the cycles seem to be getting longer. In 2013, it was 5 months from bottom to top. In the 2015-2018 cycle, it was 35 months. If April 2020 is the starting point, then the peak should come sometime around March 2023. If we assume a 14-month decline and a 14-month recovery before another spike, the next peak after that one would come sometime around July 2025.
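
For what it’s worth, the cycle arithmetic is easy to sketch in R with base date handling; the 35-month and 28-month spans are my chart-reading assumptions from above, not fitted values.

# Cycle arithmetic sketch; spans are chart-reading assumptions.
cycle_start <- as.Date("2020-04-01")
# Bottom-to-top run matching the 2015-2018 cycle length
peak1 <- seq(cycle_start, by = "35 months", length.out = 2)[2]
# A 14-month decline plus a 14-month recovery before the next spike
peak2 <- seq(peak1, by = "28 months", length.out = 2)[2]
format(c(peak1, peak2), "%B %Y")  # "March 2023" "July 2025"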

I’m inclined to think that the peak for this cycle will come a little sooner than March 2023, say sometime between July 2022 and December 2022. That would put the next peak somewhere around January 2025.

Forecasting the Total Cryptocurrency Market

After reviewing the chart, let’s try forecasting for January 1, 2025, which has the nice property that it matches the quote introducing this topic. So, what kind of range can we expect by January 1, 2025? When I take a quarterly sampling of the data from Coinmarketcap and run it through R to determine a probability interval for 2025-01-01, I get a 95% confidence interval that the total market cap will be between $0 and $6.2 trillion.[1,2] This approach likely underestimates the values, because a quarterly sampling cuts out a lot of the movement in the chart.
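
I won’t reproduce those scripts here, but a generic stand-in for what they do (not the actual code in [1,2]) might look like this: fit a linear trend to the quarterly samples and ask predict() for a 95% prediction interval at the closing date. The file name and the headerless two-column format are assumptions.

# Generic stand-in for [1,2], not the actual scripts: a linear-trend
# prediction interval from quarterly (date, value) samples.
quarterly <- read.csv("quarterly-market-cap.csv", header = FALSE,
                      col.names = c("date", "value"))
quarterly$date <- as.Date(quarterly$date)
fit <- lm(value ~ date, data = quarterly)
predict(fit, newdata = data.frame(date = as.Date("2025-01-01")),
        interval = "prediction", level = 0.95)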

If I use Bitcoin as a proxy, then I can pull the data from the FRED database and run it through these R scripts[3,4] using the following command:

> fred(code="CBBTCUSD", begin_date="2017-01-01", closing_date="2025-01-01", bins=c(0.05, 0.5, 0.95), prob_type="probands")

This returns:

Projected mean: 242494.622314164
Projected standard deviation: 89419.7876711661
   bins     probs
1  0.05  95412.16
2  0.50 242494.62
3  0.95 389577.08

When I run the same data through a Monte Carlo function instead, I get a greater than 70% chance that Bitcoin will be above $250,000 on 2025-01-01.[4]
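
The actual Monte Carlo function is in [4]; a minimal sketch of the same idea (not that code) would bootstrap historical daily log returns out to the closing date and tally the endpoints. It assumes a numeric vector btc of daily closing prices in chronological order.

# Minimal Monte Carlo sketch, not the code in [4]: bootstrap daily
# log returns out to the closing date and tally the endpoints.
set.seed(42)
returns <- diff(log(btc))                      # historical daily log returns
horizon <- as.numeric(as.Date("2025-01-01") - Sys.Date())
endings <- replicate(10000, tail(btc, 1) *
                     exp(sum(sample(returns, horizon, replace = TRUE))))
mean(endings > 250000)                         # estimated probability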

Let’s suppose Bitcoin is at $95,000, $250,000, or $390,000 on 2025-01-01. When you correlate the price of Bitcoin to the total cryptocurrency market cap on a quarterly basis (January 1, April 1, July 1, and October 1), the total is roughly the Bitcoin price * 31 million, with a standard deviation of about a million on that multiplier. This puts the range as follows (see the quick check after this list):

  • Low: Bitcoin price: $95,000, Total Cryptocurrency Market cap: $2.945 trillion
  • Projected: Bitcoin price: $250,000, Total Cryptocurrency Market cap: $7.750 trillion
  • High: Bitcoin Price: $390,000, Total Cryptocurrency Market cap: $12 trillion
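
A quick check of the multiplication, with the three Bitcoin scenarios as input (the 31 million multiplier is the quarterly correlation noted above):

btc_scenarios <- c(low = 95000, projected = 250000, high = 390000)
btc_scenarios * 31e6  # implied total crypto market cap, in dollars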

As a sanity check, you can consult the chart again. My best guess is that the total crypto market cap on 2025-01-01 will be between $6 trillion and $8 trillion. For this question, let’s take the lower end of the range – a $6 trillion market cap on January 1, 2025 – as a good, conservative guess.

Forecasting a Cryptocurrency (Ergo)

Now, to return to the comment at the top, the poster proposes a formula:

(total crypto market cap on 2025-01-01 * market share of a cryptocurrency) / number of coins = price

Let’s make the calculation of market share and coins easy and establish a floor. Suppose that Erg maintains its current market share. On Saturday, September 25, 2021, the price of Erg is $14, and there are 43,705,365 Erg in circulation. That gives us an Ergo market capitalization of ~$612 million. Total cryptocurrency capitalization today is $1.9 trillion. So, the market share is 0.0032210 (but see the correction in the update above). For this forecast, let’s just assume the maximum number of Erg, 97,739,924.
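
The poster’s formula is simple enough to wrap in a small R helper (a hypothetical convenience of mine, not anyone’s published code); it reproduces the scenarios below:

# Hypothetical helper implementing the formula above:
# (total market cap * market share) / number of coins
erg_price <- function(total_cap, share, coins = 97739924) {
  (total_cap * share) / coins
}
erg_price(6e12, 0.003221)  # ~197.73, the $6 trillion scenario below
erg_price(6e12, 0.01)      # ~613.87, the 1% market share scenario below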

Then, we calculate for different hypothetical values:

  • $4 trillion total crypto market cap, same market share, all coins: ($4 trillion * 0.003221) / 97,739,924 = $131.82
  • $6 trillion total crypto market cap, same market share, all coins: ($6 trillion * 0.003221) / 97,739,924 = $197.73
  • Double the market share at $4 trillion total crypto market capitalization, all coins: $263.64
  • $8 trillion total crypto market cap, same market share, all coins: ($8 trillion * 0.003221) / 97,739,924 = $263.64
  • Double the market share at $6 trillion total crypto market capitalization, all coins: $395.29
  • Double the market share at $8 trillion total crypto market capitalization, all coins: $527.28
  • $6 trillion total crypto market cap, 1% market share, all coins: $613.87
  • $6 trillion total crypto market cap, 2% market share, all coins: $1,227.75

On review of the above, I’m forecasting that Erg has a 90% chance of getting above $1,000 on or before January 1, 2025. I think cryptocurrencies are starting to get major traction, so the conservative $6 trillion is probably too low. Given Ergo’s technical capabilities, I can see as much as an order-of-magnitude increase in market share by January 1, 2025. That would give a top end of ($8 trillion * 0.03) / 97,739,924 = $2,455. More likely, it will be something like ($7 trillion * 0.015) / 97,739,924 = $1,074.27. Add in normal fluctuations, and it’s easy to see it crossing $1,000 in this period, if this is the base case.
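
The same hypothetical helper from above reproduces the top end and the base case:

erg_price(8e12, 0.03)    # ~2455, the top end
erg_price(7e12, 0.015)   # ~1074, the more likely case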

Let’s check back in ~1,200 days or so and see how I did.

As a reality check: even if you had invested $1,000 at $14, that’s 71.428 Erg. If the price went to $1,000, that would be $71,428. So, it’s hard to become a millionaire without getting into a cryptocurrency before it’s above $1. On the other hand, it’s hard to know which of the thousands of coins to invest in at that stage. It’s a veritable chicken-and-egg problem. But $71,428 isn’t a bad haul, a couple of years’ salary in many folks’ cases.

Conclusion

Most of the work is in trying to come up with a reasonable market cap for the entire cryptocurrency space. Once you have that number, it is fairly straightforward to calculate a minimum based on the current market share of a coin. The same procedure could be used to forecast any cryptocurrency you are interested in.

References

  1. https://gitlab.com/cafebedouin/gjp/-/blob/master/csv.R
  2. https://gitlab.com/cafebedouin/gjp/-/blob/master/functions/probands.R
  3. https://gitlab.com/cafebedouin/gjp/-/blob/master/fred.R
  4. https://gitlab.com/cafebedouin/gjp/-/blob/master/functions/monte-full.R

bash: TOTP From the Terminal With oathtool

TOTP is Time-based One Time Password. Most people use applications on their phone for TOTP, such as andOTP, Google Authenticator, and related apps. But, as we move from using a phone as a second factor for what we are doing on a computer to a phone being the primary way we interact with the Internet, it makes sense to make the computer the second factor. This is the idea behind this script. It is based on analyth’s script, except I stripped out the I/O.

#!/bin/bash

# Assign variables
google=$(oathtool --base32 --totp "YOUR SECRET KEY" -d 6)
wordpress=$(oathtool --base32 --totp "YOUR SECRET KEY" -d 6)
amazon=$(oathtool --base32 --totp "YOUR SECRET KEY" -d 6)

# Print variables
echo "google: ${google} | wordpress: ${wordpress} | amazon: ${amazon}"

This will print:

google: 123456 | wordpress: 123456 | amazon: 123456

However, I didn’t like the idea of my one-time password secrets being protected only by normal file permissions on a Linux system. I thought the script should be encrypted with gpg. So, I saved it to a file in my scripts directory, totp, and encrypted it with my public key. If you don’t have a gpg key pair, instructions are available online.

$ gpg -r your@email.com -e ~/pathto/totp

Then, to run the shell script, do:

$ gpg -d ~/pathto/totp.gpg 2>/dev/null | bash

This will prompt you for your gpg passphrase and then run the script. You likely won’t want to remember this string of commands, so you can make your life easier by adding it as an alias in .bash_aliases:

alias totp='gpg -d ~/pathto/totp.gpg 2>/dev/null | bash'

bash: Number of Days Between Today and Some Future Date

#!/bin/bash

# Print the number of whole days between today and a future
# date given as the first argument in YYYY-MM-DD format.
printf -v today '%(%Y-%m-%d)T' -1
echo $(( ($(date -d "$1" +%s) - $(date -d "$today" +%s)) / 86400 )) days

Above is a bash script that outputs the number of days between today and some future date. Copy it into a file, e.g., diffdate.sh, in a directory on your PATH, e.g., ~/bin/scripts. Then, enter that directory and make the script executable:

$ chmod +x diffdate.sh

Then, check your .profile to make sure something like this is in it:

# set PATH so it includes user's private bin if it exists
if [ -d "$HOME/bin" ] ; then                             
  PATH="$HOME/bin:$PATH"
fi                                                                   

Then, run the script.

$ diffdate.sh 2021-06-01
70 days

I have to figure out the difference between today and some future date all the time for forecasting, and today was the day I finally bothered to work out how to do it from the command line. I should start thinking of ways to write shell scripts for these little tasks I usually go to the web for.

Forecasting in R: Probability Bins for Time-Series Data

This time-series.R script, below, takes a set of historical time-series data and walks through it, comparing values separated by the length of the forecast period, to generate probabilistic outcomes from the data set.

The input is a csv file with two columns (Date, Value), with dates in reverse chronological order and in ISO-8601 format, like so:

2019-08-06,1.73                                                                
2019-08-05,1.75                                                                
2019-08-02,1.86

Output is as follows:

0.466: Bin 1 - <1.7
0.328: Bin 2 - 1.7 to <=1.9
0.144: Bin 3 - 1.9+ to <2.1
0.045: Bin 4 - 2.1 to <=2.3
0.017: Bin 5 - 2.3+

Note: Patterns in data sets will skew results. A 20-year upward trend will make the higher bins more probable. A volatile 5-year period will produce more conservative predictions and may not capture recent trends or a recent change in the direction of movement.

R Script

# time-series.R 
# Original: December 4, 2018
# Last revised: December 4, 2018

#################################################
# Description: This script is for running any 
# sequence of historical time-series data to make 
# a forecast across five bins by a particular date.
# Assumes a csv file with two columns (Date, Value) 
# with dates in reverse chronological order and in
# ISO-8601 format. Like so:
#
# 2019-08-06,1.73                                                                
# 2019-08-05,1.75                                                                
# 2019-08-02,1.86

# Clear memory:
rm(list=ls())
gc()

#################################################
# Function
# Note: hyphens are not valid in R object names,
# so the function is named time_series.
time_series <- function(time_path="./path/file.csv", 
                        closing_date="2020-01-01", trading_days=5, 
                        bin1=1.7, bin2=1.9, 
                        bin3=2.1, bin4=2.3) {

  #################################################
  # Libraries
  #
  # Load libraries. If library X is not installed
  # you can install it with this command at the R prompt:
  # install.packages('X') 

  # Determine how many days until end of question
  todays_date <- Sys.Date()
  closing_date <- as.Date(closing_date)
  remaining_weeks <- as.numeric(difftime(closing_date, todays_date, units = "weeks"))
  remaining_weeks <- round(remaining_weeks, digits=0)
  non_trading_days <- (7 - trading_days) * remaining_weeks
  day_difference <- as.numeric(difftime(closing_date, todays_date))
  remaining_days <- day_difference - non_trading_days 

  #################################################
  # Import & Parse
  # Point to time series data file and import it.
  time_import <- read.csv(time_path, header=FALSE) 
  colnames(time_import) <- c("date", "value")

  # Setting data types
  time_import$date <- as.Date(time_import$date)
  time_import$value <- as.vector(time_import$value)

  # Setting most recent value, assuming descending data
  current_value <- time_import[1,2]

  # Get the length of time_import$value and shorten it by remaining_days
  time_rows <- length(time_import$value) - remaining_days

  # Create an empty vector to hold the differences
  time_calc <- NULL

  # Iterate through value and subtract the difference 
  # from the row remaining days away.
  for (i in 1:time_rows) {
    time_calc[i] <- time_import$value[i] - time_import$value[i+remaining_days]
  }

  # Adjusted against current values to match time_calc
  adj_bin1 <- bin1 - current_value
  adj_bin2 <- bin2 - current_value
  adj_bin3 <- bin3 - current_value 
  adj_bin4 <- bin4 - current_value 

  # Determine how many trading days fall in each question bin
  prob1 <- round(sum(time_calc<adj_bin1)/length(time_calc), digits = 3)
  prob2 <- round(sum(time_calc>=adj_bin1 & time_calc<=adj_bin2)/length(time_calc), digits = 3)
  prob3 <- round(sum(time_calc>adj_bin2 & time_calc<adj_bin3)/length(time_calc), digits = 3)
  prob4 <- round(sum(time_calc>=adj_bin3 & time_calc<=adj_bin4)/length(time_calc), digits = 3)
  prob5 <- round(sum(time_calc>adj_bin4)/length(time_calc), digits = 3)
  
  ###############################################
  # Print results
  return(cat(paste0(prob1, ": Bin 1 - ", "<", bin1, "\n",
                  prob2, ": Bin 2 - ", bin1, " to <=", bin2, "\n", 
                  prob3, ": Bin 3 - ", bin2, "+ to <", bin3, "\n", 
                  prob4, ": Bin 4 - ", bin3, " to <=", bin4, "\n", 
                  prob5, ": Bin 5 - ", bin4, "+", "\n")))
}
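
Once the script is sourced, a call with placeholder values (echoing the defaults above) looks like this:

# Example call; the file path and bins are placeholders.
time_series(time_path="./data/rates.csv", closing_date="2020-01-01",
            trading_days=5, bin1=1.7, bin2=1.9, bin3=2.1, bin4=2.3)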

Forecasting with R Script: Graph of WHO Flu Data

# flu.R
# Original: November 2, 2018
# Last revised: December 3, 2018 

#################################################
# Prep: Go to the WHO Flumart website:
# http://apps.who.int/flumart/Default?ReportNo=12
# Select Year 2000 Week 1 to current year, week 52.
# Save file to the data directory. Change weeks below. 

#################################################
# Description:
# Script parses the csv file and provides a graph 
# with selectable yearly trend lines for 
# comparison, also includes analysis options at 
# bottom for predicting a particular day or 
# searching by cases.

# Clear memory
rm(list=ls())
gc()

#################################################
# Set Variables in Function
flu <- function(flu_file="./data/FluNetInteractiveReport.csv",
                week_start=1, week_end=52) {

  #################################################
  #PRELIMINARIES

  #Define basepath and set working directory:
  basepath = "~/Documents/programs/R/forecasting"
  setwd(basepath)

  #Preventing scientific notation in graphs
  options(scipen=999)

  #################################################
  #Libraries 
  # If library X is not installed, you can install 
  # it with this command: install.packages('X')
  library(plyr)
  library(tidyr)
  library(data.table)
  library(lubridate)
  library(stringr)
  library(ggplot2)
  library(dplyr)
  library(reshape2)
  library(corrplot)
  library(hydroGOF)
  library(Hmisc)
  library(forecast)
  library(tseries)

  #################################################
  # Import & Parse
  # Point to downloaded flu data file, variable is above.
  flumart <- read.csv(flu_file, skip=3, header=TRUE) 

  # Drop all the columns but the ones of interest
  flumart <- flumart[ -c(2,3,6:19,21) ]

  # Assign column names to something more reasonable
  colnames(flumart) <- c("Country", "Year", "Week", "Confirmed_Flu", "Prevalence")  

  # Assign the country variable from the first row, first column
  country <- flumart[c(1),c(1)]

  # Incomplete years mess up correlation matrix
  flu_table <- filter(flumart, Year >= 2000)

  # Drop the non-numerical columns
  flu_table <- flu_table[,-c(1,5)]

  # Reshape the table into grid
  flu_table <- reshape(flu_table, direction="wide", idvar="Week", timevar="Year")

  # Fix column names after reshaping
  names(flu_table) <- gsub("Confirmed_Flu.", "", names(flu_table))
  
  # Put into matrix for correlations
  flu_table <- as.matrix(flu_table[,-c(1,5)])

  #################################################
  # Correlate & Plot
  flu_rcorr <- rcorr(flu_table)
  flu_coeff <- flu_rcorr$r
  flu_p <- flu_rcorr$P

  flu_matches <- flu_coeff[,ncol(flu_coeff)]
  flu_matches <- sort(flu_matches, decreasing = TRUE)
  flu_matches <- names(flu_matches)
  current_year <- as.numeric(flu_matches[1])
  matching_year1 <- as.numeric(flu_matches[2])
  matching_year2 <- as.numeric(flu_matches[3])
  matching_year3 <- as.numeric(flu_matches[4])
  matching_year4 <- as.numeric(flu_matches[5])
  matching_year5 <- as.numeric(flu_matches[6])

  #################################################
  # Prediction using ARIMA

  flu_data <- flumart # Importing initial flu data
  flu_data <- filter(flu_data, Week <= 52)
  flu_data <- flu_data[, -c(1:3,5)] # Keep only Confirmed_Flu
  flu_ts <- ts(flu_data, start = 1, frequency=52) # ARIMA needs time series
  flu_data <- as.vector(flu_data)
  flu_fit <- auto.arima(flu_ts, D=1) 
  flu_pred <- forecast(flu_fit, h=52)
  flu_plot <- data.frame(Predicted_Mean = as.numeric(flu_pred$mean))
  flu_plot$Week <- as.numeric(1:nrow(flu_plot))
  
  flu_prediction <- ggplot() + 
    ggtitle("Predicted Flu Incidence") +
    geom_line(data = flu_plot, aes(x = Week, y = Predicted_Mean)) + 
    scale_x_continuous() + scale_y_continuous()

  #################################################
  # Graph

  # Creating a temp variable for graph
  flu_graph <- flumart

  # Filtering results for 5 year comparison, against 5 closest correlated years 
  flu_graph <- filter(flu_graph, Year == current_year | Year == matching_year1 |
                      Year == matching_year2 | Year == matching_year3 | 
                      Year == matching_year4 | Year == matching_year5)

  # These variables need to be numerical
  flu_graph$Week <- as.numeric(flu_graph$Week)
  flu_graph$Confirmed_Flu <- as.numeric(flu_graph$Confirmed_Flu)

  # Limit to weeks of interest
  flu_graph <- filter(flu_graph, Week >= week_start)
  flu_graph <- filter(flu_graph, Week <= week_end)  

  # The variable used to color and split the data should be a factor so lines are properly drawn
  flu_graph$Year <- factor(flu_graph$Year)

  # Lays out flu_graph by week with a colored line for each year,
  # plus the ARIMA forecast
  flu_compare <- ggplot() + 
    ggtitle(paste("Confirmed Flu in", country)) +
    geom_line(data = flu_graph, aes(x = Week, y = Confirmed_Flu, color = Year)) + 
    geom_line(data = flu_plot, aes(x = Week, y = Predicted_Mean, color = "Forecast")) +
    scale_x_continuous()

  flu_week <- flu_graph[nrow(flu_graph),3]
  flu_year <- flu_graph[nrow(flu_graph),2]
  summary(flu_graph)

  ###############################################
  # Printing
  # Creating a csv file of the reshaped data
  write.csv(flu_table, file=paste0("./output/flu-in-", country, "-table-",
                                   flu_year, "-week-", flu_week, ".csv")) 
  
  # Print flu_compare to screen (explicit print needed inside a function)
  print(flu_compare)
  
  # Save flu_prediction to PDF
  ggsave(filename=paste0("./output/flu-in-", country, 
                         "-prediction-", flu_year, "-week-", flu_week, ".pdf"), plot=flu_prediction)
  
  # Save flu_compare to PDF
  ggsave(filename=paste0("./output/flu-in-", country, 
                         "-compare-", flu_year, "-week-", flu_week, ".pdf"), plot=flu_compare)
  
  # Print correlation matrix
  pdf(paste0("./output/flu-in-", country, "-correlation-", 
             flu_year, "-week-", flu_week, ".pdf"))
  corrplot(flu_coeff, method="pie", type="lower")
  dev.off()
    
  return(flu_graph[nrow(flu_graph),])
}
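
Once sourced, a run with the default file looks like this (it assumes the FluNet csv sits in ./data and that the ./output directory exists):

# Example call with the default file path.
flu(flu_file="./data/FluNetInteractiveReport.csv", week_start=1, week_end=52)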