v1.0 To v1.1 Migration
As part of the v1.1 release there were two significant changes that require a primary store migration:
- Updates to the event model
- Elasticsearch default view mapping update for 7.x compatibility
Additionally, due to a bug in the underlying persistence library, some events may have been skipped when populating the tag_views table, which is used as a reverse index for event tags.
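For context, the tag_views table acts as a reverse index: it lets the services find events by tag rather than by persistence id. The snippet below is an illustrative peek at that index, assuming the schema created by the underlying persistence library and a journal keyspace named kg; the host placeholder and keyspace name are assumptions to adjust for your deployment.
# Illustrative only, not a migration step: sample the reverse index to see which tags are present
cqlsh {cassandra_host} -e "SELECT tag_name, persistence_id FROM kg.tag_views LIMIT 10;"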
The migration steps are as follows:
- Stop the 3 services: iam, admin and kg.
- Back up the Cassandra store. This step is a safeguard against anything that could go wrong while the migration scripts run; it is also good practice as protection against hardware failures.
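One possible way to take such a backup, assuming shell access to the Cassandra nodes and the standard nodetool utility; the snapshot tag and the keyspace names (iam, admin, kg) are assumptions, so adjust them to your deployment and run the command on every node.
# Take a named snapshot of the keyspaces used by the services (repeat on each node)
nodetool snapshot -t pre-v1.1-migration iam admin kg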
- Delete all Elasticsearch indices:
curl -XDELETE 'http://{elasticsearch_host}/kg_*'
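As an optional sanity check, you can list any remaining indices matching the same pattern; apart from the header row, the output should be empty once the deletion has succeeded.
curl 'http://{elasticsearch_host}/_cat/indices/kg_*?v'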
- Delete all Blazegraph namespaces:
for i in `curl -s 'http://{blazegraph_host}/blazegraph/namespace?describe-each-named-graph=false' | grep sparqlEndpoint | grep -o --color "rdf:resource=\"[^\"]*" | sed 's/rdf:resource="//' | sed 's#/sparql$##' | grep -v kb | grep -v LBS`; do
  curl -X DELETE "$i"
done
- Update the Elasticsearch software from 6.x to 7.2 or above.
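To confirm the upgrade took effect, the Elasticsearch root endpoint reports the running version; version.number should read 7.2 or above.
curl 'http://{elasticsearch_host}/'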
- Deploy the new service images (tag 1.1): iam, admin and kg. Start the services one by one, in that order, waiting for each to become available before starting the next, and follow these instructions (a deployment sketch is shown after this list):
  - When running the iam and admin services for the first time in v1.1, set the environment variable REPAIR_FROM_MESSAGES to true. This environment variable should be removed in subsequent runs. It makes the services rebuild the Cassandra database from the messages table, correcting any problems with the tag_views table.
  - When running the kg service for the first time in v1.1, set the environment variable MIGRATE_V10_TO_V11 to true. This environment variable should be removed in subsequent runs. It triggers the event migration and a full rebuild of the Cassandra database from the messages table, correcting any problems with the tag_views table.
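The exact deployment mechanism depends on your environment. The sketch below shows the first v1.1 run using plain Docker; the image names and the minimal settings shown are assumptions, to be adapted to your own orchestration (Kubernetes, Docker Swarm, etc.).
# Start the services one at a time, waiting for each to become available before launching the next.
# Remove the environment variables on subsequent runs.
docker run -d --name iam   -e REPAIR_FROM_MESSAGES=true bluebrain/nexus-iam:1.1
docker run -d --name admin -e REPAIR_FROM_MESSAGES=true bluebrain/nexus-admin:1.1
docker run -d --name kg    -e MIGRATE_V10_TO_V11=true   bluebrain/nexus-kg:1.1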
Migration process
This section explains what happens during the migration process. Knowledge of what happens behind the scenes is not necessary to migrate successfully to v1.1, but it may be of interest.
Iam and admin migration process
The migration for these services is straightforward since the changes introduced did not affect the main models in the Cassandra store. The migration is essentially a repair of the Cassandra store to make sure that every row which should be present in the tag_views table is indeed there. To achieve that, during the boot process the service wipes all tables but messages and materializes all persistent actors found in the messages table to force a repair of all dependent tables.
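Nothing needs to be run manually for this repair, but once it has completed you can spot-check that the rebuilt tables contain data again, for example by sampling tag_views. The keyspace name iam below is an assumption about how your deployment names the journal keyspace.
# Illustrative spot-check after the automatic repair has finished
cqlsh {cassandra_host} -e "SELECT persistence_id FROM iam.tag_views LIMIT 5;"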
Kg migration process
The migration for this service includes several steps, since the content of the messages table in the Cassandra store requires modifications as described above.
The following steps are executed:
- all tables but messages are wiped
- all project information is loaded from admin.messages
- all events in the messages table are migrated to the new model
- a full rebuild of the Cassandra store is performed by materializing all persistent actors found in the messages table to force a repair of all dependent tables (equivalent to the behaviour of the REPAIR_FROM_MESSAGES flag for the iam and admin services)
- all index processes are restarted from scratch
Depending on the volume of data stored in the system, the migration process can take a fairly long time. During this window the service does not bind to the HTTP port, to avoid any de-synchronization.
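If you want to detect the end of this window from the outside, a simple approach is to poll the service until its HTTP port is bound again. The host placeholder and the probed path below are assumptions; point the check at whichever route your deployment exposes.
# Wait until the kg service starts accepting HTTP connections again
until curl -s -o /dev/null "http://{kg_host}/"; do
  echo "kg not up yet, retrying in 30s..."
  sleep 30
done
echo "kg is accepting HTTP connections"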