Version specific update instructions

Check this document for instructions that apply to the version you are updating to. These steps go together with the general steps for updating Geo nodes.

Updating to GitLab 12.2

GitLab 12.2 includes minor PostgreSQL updates. These updates will occur even if major PostgreSQL updates are disabled.
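If you want to confirm which PostgreSQL version a node is running before and after the update, one option is to query the server through the Omnibus `gitlab-psql` wrapper. This is a suggested check rather than an official step, and the database name assumes the default:

```sh
# Asks the running server for its version string
sudo gitlab-psql -d gitlabhq_production -c 'SELECT version();'
```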

Before refreshing the Foreign Data Wrapper during a Geo HA upgrade, restart the Geo tracking database:

```sh
sudo gitlab-ctl restart geo-postgresql
```

The restart avoids a version mismatch when PostgreSQL tries to load the FDW extension.
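As a sketch of the resulting order of operations on a secondary node (the refresh task name assumes the Geo rake tasks shipped with this release):

```sh
# Restart the tracking database first, then refresh the FDW tables
sudo gitlab-ctl restart geo-postgresql
sudo gitlab-rake geo:db:refresh_foreign_tables
```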

Updating to GitLab 12.1

By default, GitLab 12.1 attempts to automatically update the embedded PostgreSQL server from 9.6 to 10.7. See the Omnibus documentation for the recommended procedure.

This can be temporarily disabled by running the following before updating:

```sh
sudo touch /etc/gitlab/disable-postgresql-upgrade
```
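When you are later ready to perform the PostgreSQL update, you can remove the flag and trigger the upgrade manually. This is a sketch, so check the Omnibus documentation for the full procedure:

```sh
# Remove the flag, then run the Omnibus PostgreSQL upgrade command
sudo rm /etc/gitlab/disable-postgresql-upgrade
sudo gitlab-ctl pg-upgrade
```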

Updating to GitLab 10.8

Before 10.8, broadcast messages would not propagate without flushing the cache on the secondary nodes. This has been fixed in 10.8, but requires one last cache flush on each secondary node:

```sh
sudo gitlab-rake cache:clear
```

Updating to GitLab 10.6

In GitLab 10.4, we began recommending that you define a password for the database user (gitlab).

This change is now required, because the password is used to enable the Foreign Data Wrapper, which optimizes the Geo Tracking Database. We are also improving security by disabling the use of the trust authentication method.

  1. (primary) Log in to your primary node and run:

```sh
gitlab-ctl pg-password-md5 gitlab
# Enter password: <your_password_here>
# Confirm password: <your_password_here>
# fca0b89a972d69f00eb3ec98a5838484
```

Copy the generated hash and edit /etc/gitlab/gitlab.rb:

```ruby
# Fill with the hash generated by `gitlab-ctl pg-password-md5 gitlab`
postgresql['sql_user_password'] = ''

# Every node that runs Unicorn or Sidekiq needs to have the database
# password specified as below. If you have a high-availability setup, this
# must be present in all application nodes.
gitlab_rails['db_password'] = ''
```

Still in the configuration file, locate and remove the trust_auth_cidr_addresses setting:

```ruby
postgresql['trust_auth_cidr_addresses'] = ['127.0.0.1/32','1.2.3.4/32'] # <- Remove this
```

  2. (primary) Reconfigure and restart:

```sh
sudo gitlab-ctl reconfigure
sudo gitlab-ctl restart
```

  3. (secondary) Log in to all secondary nodes and edit /etc/gitlab/gitlab.rb:

```ruby
# Fill with the hash generated by `gitlab-ctl pg-password-md5 gitlab`
postgresql['sql_user_password'] = ''

# Every node that runs Unicorn or Sidekiq needs to have the database
# password specified as below. If you have a high-availability setup, this
# must be present in all application nodes.
gitlab_rails['db_password'] = ''

# Enable Foreign Data Wrapper
geo_secondary['db_fdw'] = true

# Secondary address in CIDR format, for example '5.6.7.8/32'
postgresql['md5_auth_cidr_addresses'] = ['<secondary_node_ip>/32']
```

Still in the configuration file, locate and remove the trust_auth_cidr_addresses setting:

```ruby
postgresql['trust_auth_cidr_addresses'] = ['127.0.0.1/32','5.6.7.8/32'] # <- Remove this
```

  4. (secondary) Reconfigure and restart:

```sh
sudo gitlab-ctl reconfigure
sudo gitlab-ctl restart
```
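After the reconfigure and restart, you can sanity-check the Geo setup with the built-in check task. This is a suggested verification rather than a required step:

```sh
# Reports on Geo configuration health, including database settings
sudo gitlab-rake gitlab:geo:check
```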

Updating to GitLab 10.5

For Geo Disaster Recovery to work with minimum downtime, your secondary node should use the same set of secrets as the primary node. However, setup instructions prior to the 10.5 release only synchronized the db_key_base secret.

To rectify this error on existing installations, you should overwrite the contents of /etc/gitlab/gitlab-secrets.json on each secondary node with the contents of /etc/gitlab/gitlab-secrets.json on the primary node, then run the following command on each secondary node:

```sh
sudo gitlab-ctl reconfigure
```
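For example, assuming each secondary node has root SSH access to the primary node (the hostname below is a placeholder), the copy could be done like this:

```sh
# Run on each secondary node; keep a backup of the old secrets first
sudo cp /etc/gitlab/gitlab-secrets.json /etc/gitlab/gitlab-secrets.json.bak
sudo scp root@primary.example.com:/etc/gitlab/gitlab-secrets.json /etc/gitlab/gitlab-secrets.json
sudo gitlab-ctl reconfigure
```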

If you do not perform this step, you may find that two-factor authentication is broken following DR.

To prevent SSH requests to a newly promoted primary node from failing due to an SSH host key mismatch when you update the primary node domain's DNS record, perform the step to manually replicate the primary SSH host keys on each secondary node.

Updating to GitLab 10.3

Support for SSH repository synchronization removed

In GitLab 10.2, synchronizing secondaries over SSH was deprecated. In 10.3, support is removed entirely. All installations will switch to the HTTP/HTTPS cloning method instead. Before updating, ensure that all your Geo nodes are configured to use this method and that it works for your installation. In particular, ensure that Git access over HTTP/HTTPS is enabled.
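For example, one quick way to confirm that Git access over HTTP/HTTPS works is to list a project's refs; the URL and project path below are placeholders:

```sh
# Should print the project's refs without errors if HTTP(S) access is enabled
git ls-remote https://gitlab.example.com/group/project.git
```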

Synchronizing repositories over the public Internet using HTTP is insecure, so you should ensure that you have HTTPS configured before updating. Note that file synchronization is also insecure in these cases!

Updating to GitLab 10.2

Secure PostgreSQL replication

Support for TLS-secured PostgreSQL replication has been added. If you are currently using PostgreSQL replication across the open internet without an external means of securing the connection (e.g., a site-to-site VPN), then you should immediately reconfigure your primary and secondary PostgreSQL instances according to the updated instructions.

If you are securing the connections externally and wish to continue doing so, ensure you include the new option --sslmode=prefer in future invocations of gitlab-ctl replicate-geo-database.
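For example, a future invocation might look like the following, where the host value is a placeholder for your primary node and other options from the Geo setup instructions may also apply:

```sh
sudo gitlab-ctl replicate-geo-database --host=primary.example.com --sslmode=prefer
```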

HTTPS repository sync

Support for replicating repositories and wikis over HTTP/HTTPS has been added. Replicating over SSH has been deprecated, and support for this option will be removed in a future release.

To switch to HTTP/HTTPS replication, log into the primary node as an admin and visit Admin Area > Geo (/admin/geo/nodes). For each secondary node listed, press the "Edit" button, change the "Repository cloning" setting from "SSH (deprecated)" to "HTTP/HTTPS", and press "Save changes". This should take effect immediately.

Any new secondaries should be created using HTTP/HTTPS replication - this is the default setting.

After you've verified that HTTP/HTTPS replication is working, you should remove the now-unused SSH keys from your secondaries, as they may cause problems if the secondary node is ever promoted to a primary node:

  1. (secondary) Log in to all your secondary nodes and run:

```sh
sudo -u git -H rm ~git/.ssh/id_rsa ~git/.ssh/id_rsa.pub
```

Hashed Storage

CAUTION: Warning: Hashed storage is in Alpha. It is considered experimental and not production-ready. See Hashed Storage for more detail.

If you previously enabled hashed storage and migrated all your existing projects to it, disabling hashed storage will not migrate projects back to their previous project-based storage paths. As such, once enabled and migrated, we recommend leaving hashed storage enabled.

Updating to GitLab 10.1

CAUTION: Warning: Hashed storage is in Alpha. It is considered experimental and not production-ready. See Hashed Storage for more detail.

Hashed storage was introduced in GitLab 10.0, and a migration path for existing repositories was added in GitLab 10.1.

Updating to GitLab 10.0

Since GitLab 10.0, we require all Geo systems to use SSH key lookups via the database to avoid having to maintain consistency of the authorized_keys file for SSH access. Failing to do this will prevent users from being able to clone via SSH.
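For reference, database-backed lookups work by pointing OpenSSH's AuthorizedKeysCommand at GitLab Shell. A typical Omnibus configuration looks like the sketch below; the paths assume a default Omnibus install, so follow the fast SSH key lookup documentation for your version:

```
# /etc/ssh/sshd_config
AuthorizedKeysCommand /opt/gitlab/embedded/service/gitlab-shell/bin/gitlab-shell-authorized-keys-check git %u %k
AuthorizedKeysCommandUser git
```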

Note that in older versions of Geo, attachments downloaded on the secondary nodes would be saved to the wrong directory. We recommend that you do the following to clean this up.

On the secondary Geo nodes, run as root:

```sh
mv /var/opt/gitlab/gitlab-rails/working /var/opt/gitlab/gitlab-rails/working.old
mkdir /var/opt/gitlab/gitlab-rails/working
chmod 700 /var/opt/gitlab/gitlab-rails/working
chown git:git /var/opt/gitlab/gitlab-rails/working
```

You may delete /var/opt/gitlab/gitlab-rails/working.old at any time.

Once this is done, we advise restarting GitLab on the secondary nodes for the new working directory to be used:

```sh
sudo gitlab-ctl restart
```

Updating from GitLab 9.3 or older

If you started running Geo on GitLab 9.3 or older, we recommend that you resync your secondary PostgreSQL databases to use replication slots. If you started using Geo with GitLab 9.4 or 10.x, no further action should be required because replication slots are used by default. However, if you started with GitLab 9.3 and updated later, you should still follow the instructions below.

When in doubt, it does not hurt to do a resync. The easiest way to do this in Omnibus is the following:

  1. Make sure you have Omnibus GitLab on the primary server.
  2. Run gitlab-ctl reconfigure and gitlab-ctl restart postgresql. This will enable replication slots on the primary database.
  3. Check the steps about defining postgresql['sql_user_password'] and gitlab_rails['db_password'].
  4. Make sure postgresql['max_replication_slots'] matches the number of secondary Geo nodes; see the sketch after this list.
  5. Install GitLab on the secondary server.
  6. Re-run the database replication process.
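As a minimal sketch of the relevant setting in /etc/gitlab/gitlab.rb on the primary, assuming two secondary nodes:

```ruby
# One replication slot per secondary Geo node
postgresql['max_replication_slots'] = 2
```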

Updating to GitLab 9.0

IMPORTANT: With GitLab 9.0, the PostgreSQL version is updated to 9.6 and manual steps are required in order to update the secondary nodes and keep the Streaming Replication working. Downtime is required, so plan ahead.

The following steps apply only if you update from GitLab 8.17 to 9.0+. For earlier versions, update to GitLab 8.17 first before attempting to update to 9.0+.

Make sure to follow the steps in the exact order in which they appear below, and pay extra attention to which node (primary or secondary) you run them on. Each step is prefixed with the relevant node for clarity:

  1. (secondary) Log in to all your secondary nodes and stop all services:

```sh
sudo gitlab-ctl stop
```

  2. (secondary) Make a backup of the recovery.conf file on all secondary nodes to preserve PostgreSQL's credentials:

```sh
sudo cp /var/opt/gitlab/postgresql/data/recovery.conf /var/opt/gitlab/
```

  3. (primary) Update the primary node to GitLab 9.0 following the regular update docs. At the end of the update, the primary node will be running with PostgreSQL 9.6.

  4. (primary) To prevent a de-synchronization of the repository replication, stop all services except postgresql, as we will use it to re-initialize the secondary node's database:

```sh
sudo gitlab-ctl stop
sudo gitlab-ctl start postgresql
```

  5. (secondary) Run the following steps on each of the secondary nodes:

    1. (secondary) Stop all services:

    ```sh
    sudo gitlab-ctl stop
    ```

    2. (secondary) Prevent running database migrations:

    ```sh
    sudo touch /etc/gitlab/skip-auto-migrations
    ```

    3. (secondary) Move the old database to another directory:

    ```sh
    sudo mv /var/opt/gitlab/postgresql{,.bak}
    ```

    4. (secondary) Update to GitLab 9.0 following the regular update docs. At the end of the update, the node will be running with PostgreSQL 9.6.

    5. (secondary) Make sure all services are up:

    ```sh
    sudo gitlab-ctl start
    ```

    6. (secondary) Reconfigure GitLab:

    ```sh
    sudo gitlab-ctl reconfigure
    ```

    7. (secondary) Run the PostgreSQL upgrade command:

    ```sh
    sudo gitlab-ctl pg-upgrade
    ```

    8. (secondary) See the stored credentials for the database that you will need to re-initialize the replication:

    ```sh
    sudo grep -s primary_conninfo /var/opt/gitlab/recovery.conf
    ```

    9. (secondary) Save the snippet below in a file, for example /tmp/replica.sh. Modify the embedded paths if necessary:

    ```sh
    #!/bin/bash

    PORT="5432"
    USER="gitlab_replicator"
    echo ---------------------------------------------------------------
    echo WARNING: Make sure this script is run from the secondary server
    echo ---------------------------------------------------------------
    echo
    echo Enter the IP or FQDN of the primary PostgreSQL server
    read HOST
    echo Enter the password for $USER@$HOST
    read -s PASSWORD
    echo Enter the required sslmode
    read SSLMODE

    echo Stopping PostgreSQL and all GitLab services
    sudo service gitlab stop
    sudo service postgresql stop

    echo Backing up postgresql.conf
    sudo -u postgres mv /var/opt/gitlab/postgresql/data/postgresql.conf /var/opt/gitlab/postgresql/

    echo Cleaning up old cluster directory
    sudo -u postgres rm -rf /var/opt/gitlab/postgresql/data

    echo Starting base backup as the replicator user
    echo Enter the password for $USER@$HOST
    sudo -u postgres /opt/gitlab/embedded/bin/pg_basebackup -h $HOST -D /var/opt/gitlab/postgresql/data -U gitlab_replicator -v -x -P

    echo Writing recovery.conf file
    sudo -u postgres bash -c "cat > /var/opt/gitlab/postgresql/data/recovery.conf <<- EOF1
      standby_mode = 'on'
      primary_conninfo = 'host=$HOST port=$PORT user=$USER password=$PASSWORD sslmode=$SSLMODE'
    EOF1
    "

    echo Restoring postgresql.conf
    sudo -u postgres mv /var/opt/gitlab/postgresql/postgresql.conf /var/opt/gitlab/postgresql/data/

    echo Starting PostgreSQL
    sudo service postgresql start
    ```

    10. (secondary) Run the recovery script using the credentials from the previous step:

    ```sh
    sudo bash /tmp/replica.sh
    ```

    11. (secondary) Reconfigure GitLab:

    ```sh
    sudo gitlab-ctl reconfigure
    ```

    12. (secondary) Start all services:

    ```sh
    sudo gitlab-ctl start
    ```

  6. (secondary) Repeat the previous steps for the remaining secondary nodes.

  7. (primary) After all secondary nodes are updated, start all services on the primary node:

```sh
sudo gitlab-ctl start
```
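Once everything is back up, you can confirm from the primary node that each secondary has reconnected as a streaming replica. This is a suggested check, using the Omnibus `gitlab-psql` wrapper and the default database name:

```sh
# Each secondary should appear with state 'streaming'
sudo gitlab-psql -d gitlabhq_production -c 'SELECT client_addr, state FROM pg_stat_replication;'
```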

Update tracking database on secondary node

After updating a secondary node, you might need to run migrations on the tracking database. The tracking database was added in GitLab 9.1, and it has been required since GitLab 10.0.

  1. Run database migrations on the tracking database:

```sh
sudo gitlab-rake geo:db:migrate
```

  2. Repeat this step for each secondary node.
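Afterwards, you can get an overview of a secondary node's replication state with the Geo status task. This is a suggested verification rather than a required step:

```sh
# Summarizes sync status for this secondary node
sudo gitlab-rake geo:status
```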