Configuring the Migration
Before migrating your data from the NFS, make sure all nodes in your HA cluster are up and running. Then, to configure the migration of your data for the use cases described above, follow the procedure below:
- Verify versions
- Verify configuration files are synchronized
- Edit the ha-node.properties file
- Copy data to the new location
- Configure binarystore.xml to match your setup
- Test the configuration
Verifying Versions
Before proceeding with transferring your data, you need to verify that all cluster nodes are installed with exactly the same version of Artifactory, which must be 5.0 or above. To verify the version running on each node in your HA cluster, in the Admin module under Configuration | High Availability, check the Version column of the table displaying your HA nodes.
Verify Configuration Files are Synchronized
When upgrading your HA cluster from version 4.x to version 5.x, an automatic conversion process synchronizes the configuration files for all the cluster nodes. This replaces the need for the $CLUSTER_HOME/ha-etc folder that was used in v4.x. Once you have verified that all nodes are running the same version, you should verify that all configuration files are synchronized between the nodes. For each node, navigate to its $ARTIFACTORY_HOME/etc folder and verify the following:
| File | Description |
|------|-------------|
| ha-node.properties | Each node should still have this file configured as described in Create ha-node.properties. |
| db.properties | This file was introduced in Artifactory 5.0 and defines the connection to the database. The password specified in this file is encrypted by the key in the master.key file. It should be identical on each cluster node. |
| binarystore.xml | This file opens up the full set of options to configure your binary storage without the NFS. It contains the binary provider configuration according to how you wish to store your binaries. For each of the use cases described above, you can find the corresponding binary provider configuration under Configure binarystore.xml. |
| master.key | This file contains the key used to encrypt and decrypt files that are used to synchronize the cluster nodes. It should be identical on each cluster node. |
From version 5.0, Artifactory HA synchronizes configuration files from the primary node to all secondary nodes; a change made to one of these files on the primary triggers the mechanism that synchronizes the change to the other nodes.
Sync carefully
Since changes made on one node are automatically synchronized to the other nodes, take care not to modify the same file on two different nodes simultaneously, since changes you make on one node could overwrite the changes you make on the other.
Edit the ha-node.properties File
Locate the ha-node.properties file on each node under the $ARTIFACTORY_HOME/etc folder, and comment out or remove the following entries; otherwise, Artifactory will continue to write according to the paths you previously configured on the shared file system.
```
artifactory.ha.data.dir=/var/opt/jfrog/artifactory-ha
artifactory.ha.backup.dir=/var/opt/jfrog/artifactory-backup
```
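For example, after this edit a node's ha-node.properties might look like the following sketch. The node.id, context.url, membership.port, and primary values shown here are illustrative placeholders; keep your own existing values unchanged:

```properties
# Existing HA settings remain as they were (placeholder values shown)
node.id=node1
context.url=http://10.0.0.1:8081/artifactory
membership.port=10001
primary=true

# Commented out so Artifactory stops writing to the shared NFS paths
# artifactory.ha.data.dir=/var/opt/jfrog/artifactory-ha
# artifactory.ha.backup.dir=/var/opt/jfrog/artifactory-backup
```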
Copy Data to the New Location
Once you have verified your configuration files are correctly synchronized, you are ready to migrate your data. The sub-sections below describe how to migrate your data for the three use cases described in the Overview above.
Use Case 1: NFS → Local FS
For this use case, we first need to ensure that there is enough storage available on each node to accommodate the volume of data in your data folder and the desired redundancy. In general, you need to comply with the following formula:
Max storage * redundancy < total space available on all nodes
For example,
- If you expect the maximum storage in your environment to be 100 TB
- Your redundancy is 2
- You have 4 nodes in your cluster,

Then each node should have at least 50 TB of storage available.
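Before copying, it is worth confirming the free space on each node. A minimal check, assuming the default installation path (adjust to your environment):

```sh
# Check free space on the disk that will hold the node's data folder
df -h /var/opt/jfrog/artifactory/data
```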
For a redundancy of N, copy the data from your NFS to N of the nodes in your cluster.
For example, for a redundancy of 2, and assuming you have two nodes named "Node1" and "Node2" respectively, copy the $CLUSTER_HOME/ha-data folder to the $ARTIFACTORY_HOME/data folder on each of Node1 and Node2.
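A minimal sketch of such a copy using rsync, assuming the NFS is mounted at /mnt/cluster-home, the default Artifactory home path, and an artifactory service user (all of these are placeholders; adjust paths, hosts, and ownership for your environment):

```sh
# Run once per target node (Node1 and Node2 in this example)
rsync -a /mnt/cluster-home/ha-data/ node1:/var/opt/jfrog/artifactory/data/
rsync -a /mnt/cluster-home/ha-data/ node2:/var/opt/jfrog/artifactory/data/

# Make sure the copied files are owned by the user running Artifactory
ssh node1 'chown -R artifactory:artifactory /var/opt/jfrog/artifactory/data'
ssh node2 'chown -R artifactory:artifactory /var/opt/jfrog/artifactory/data'
```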
Optimize distribution of your files
Once you have copied your filestore to each of the N nodes according to the desired redundancy, we recommend invoking the Optimize System Storage REST API endpoint in order to optimize the storage by balancing it amongst all nodes in the cluster.
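For example, assuming an admin user and a node reachable on the default port (credentials and host below are placeholders), the call looks like this:

```sh
# Trigger storage optimization across the cluster (Optimize System Storage)
curl -u admin:password -X POST "http://localhost:8081/artifactory/api/system/storage/optimize"
```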
Use Case 2: NFS Eventual + S3 → Local FS Eventual + S3
This use case refers to using S3 as persistent storage, but is equally applicable to other cloud object store providers such as GCS, CEPH, OpenStack and other supported vendors.
In this use case, you only need to ensure that there are no files in the eventual folder of your NFS. If any files are still there, they should be moved to your cloud storage provider bucket, or to the eventual folder of one of the nodes.
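For example, with the AWS CLI and placeholder paths (NFS mounted at /mnt/cluster-home, bucket my-artifactory-bucket, and the s3 provider's path set to filestore; all hypothetical), the leftovers could be moved like this:

```sh
# Move any leftover files from the NFS eventual folder into the S3 bucket
aws s3 cp /mnt/cluster-home/ha-data/eventual/ s3://my-artifactory-bucket/filestore/ --recursive
```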
Use Case 3: NFS → Local FS Eventual + S3
Migrating a filestore for a single-node installation to S3 is normally an automatic procedure handled by Artifactory. However, in the case of moving an HA filestore from the NFS, the automatic procedure does not work since the folder structure changes.
In this case, you need to copy the data under $CLUSTER_HOME/ha-data from your NFS to the bucket on your cloud storage provider (here too, the other providers described in Use Case 2 are also supported), while making sure that there are no files left in the _queue or _pre folders of the eventual binary provider on your node's local file system.
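A sketch of this copy using the AWS CLI, assuming the NFS is mounted at /mnt/cluster-home, the bucket is named my-artifactory-bucket, and the s3 provider's path is filestore (all placeholders):

```sh
# Upload the existing filestore from the NFS to the S3 bucket
aws s3 sync /mnt/cluster-home/ha-data/ s3://my-artifactory-bucket/filestore/

# Confirm the eventual provider's _queue and _pre folders are empty on each node
ls /var/opt/jfrog/artifactory/data/eventual/_queue /var/opt/jfrog/artifactory/data/eventual/_pre
```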
Configure binarystore.xml
In this step, you need to configure binarystore.xml to match the setup of the use case you have selected. Note that the three use cases above use one of two final configurations:
- All data is stored on the cluster node's local filesystem (labelled here as Local FS)
- The cluster nodes use the cluster node's local filesystem as an eventual binary provider and data is persistently stored on S3 (labelled here as Local FS Eventual + S3)
Node downtime required
To modify the binarystore.xml file for a node, you first need to gracefully shut down the node, modify the file, and then restart the node in order for your new configuration to take effect.
Local FS
In this example, all data is stored on the nodes' file systems. For the sake of this example, we will assume that:
- We have 3 nodes
- We want redundancy = 2
To accomplish this setup, you need to:
- Copy the data from the $CLUSTER_HOME/ha-data folder on your NFS to the $ARTIFACTORY_HOME/data folder on two of the nodes.
- Once all data has been copied, place the binarystore.xml under $ARTIFACTORY_HOME/etc of each cluster node.
- Finally, gracefully restart each node for the changes to take effect (see the sketch below).
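For the last two steps, a minimal per-node sketch, assuming a default-path installation managed as a systemd service (the paths and service name are placeholders; adjust to your install type):

```sh
# Place the new binarystore.xml in the node's etc folder
cp binarystore.xml /var/opt/jfrog/artifactory/etc/binarystore.xml

# Gracefully restart the node so the new configuration takes effect
systemctl restart artifactory
```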
Optimizing the redundant storage
After restarting your system, you can trigger optimization using the REST API so that all three nodes are utilized for redundancy. For details, please refer to Optimize System Storage.
Example
In this use case, the binarystore.xml used with the NFS before migration would look like the following if you are using one of the default file-system templates.
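For reference, a minimal binarystore.xml using the default file-system chain template has this form:

```xml
<config version="2">
    <chain template="file-system"/>
</config>
```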
After migrating the data, the new binarystore.xml placed on each cluster node can use the cluster-file-system template.
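That is, a minimal binarystore.xml selecting this template:

```xml
<config version="2">
    <chain template="cluster-file-system"/>
</config>
```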
While you don't need to configure anything else, this is what the cluster-file-system template looks like:
Redundancy leniency
We recommend adding the lenientLimit parameter under the sharding-cluster provider in the configuration below, for example: <lenientLimit>1</lenientLimit>. Without this parameter, Artifactory won't accept artifact deployments while the number of live nodes in your cluster is lower than the specified redundancy.
```xml
<config version="2">
    <chain> <!--template="cluster-file-system"-->
        <provider id="cache-fs" type="cache-fs">
            <provider id="sharding-cluster" type="sharding-cluster">
                <sub-provider id="state-aware" type="state-aware"/>
                <dynamic-provider id="remote" type="remote"/>
                <property name="zones" value="local,remote"/>
            </provider>
        </provider>
    </chain>
    <provider id="state-aware" type="state-aware">
        <zone>local</zone>
    </provider>
    <provider id="remote" type="remote">
        <zone>remote</zone>
    </provider>
    <provider id="sharding-cluster" type="sharding-cluster">
        <readBehavior>crossNetworkStrategy</readBehavior>
        <writeBehavior>crossNetworkStrategy</writeBehavior>
        <redundancy>2</redundancy>
    </provider>
</config>
```
Local FS Eventual + S3
In this example, data is temporarily stored on the file system of each node using an Eventual binary provider, and is then passed on to your S3 object storage for persistent storage.
In this use case, the binarystore.xml that used your NFS for the cache and eventual folders with your object store on S3 before migration would look like the following if you are using the s3 template.
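A minimal configuration using the default s3 chain template has this form (the s3 provider parameters are account-specific placeholders):

```xml
<config version="2">
    <chain template="s3"/>
    <provider id="s3" type="s3">
        <endpoint>http://s3.amazonaws.com</endpoint>
        <identity>[ENTER IDENTITY HERE]</identity>
        <credential>[ENTER CREDENTIALS HERE]</credential>
        <path>[ENTER PATH HERE]</path>
        <bucketName>[ENTER BUCKET NAME HERE]</bucketName>
    </provider>
</config>
```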
After migrating your filestore to S3 (and no longer using the NFS), your binarystore.xml should use the cluster-s3 template as follows:
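That is, a minimal binarystore.xml selecting this template:

```xml
<config version="2">
    <chain template="cluster-s3"/>
</config>
```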
The cluster-s3 template looks like this:
Redundancy leniency
We recommend adding the lenientLimit parameter under the sharding-cluster provider in the configuration below, for example: <lenientLimit>1</lenientLimit>. Without this parameter, Artifactory won't accept artifact deployments while the number of live nodes in your cluster is lower than the specified redundancy.
```xml
<config version="2">
    <chain> <!--template="cluster-s3"-->
        <provider id="cache-fs-eventual-s3" type="cache-fs">
            <provider id="sharding-cluster-eventual-s3" type="sharding-cluster">
                <sub-provider id="eventual-cluster-s3" type="eventual-cluster">
                    <provider id="retry-s3" type="retry">
                        <provider id="s3" type="s3"/>
                    </provider>
                </sub-provider>
                <dynamic-provider id="remote-s3" type="remote"/>
                <property name="zones" value="local,remote"/>
            </provider>
        </provider>
    </chain>
    <provider id="sharding-cluster-eventual-s3" type="sharding-cluster">
        <readBehavior>crossNetworkStrategy</readBehavior>
        <writeBehavior>crossNetworkStrategy</writeBehavior>
        <redundancy>2</redundancy>
    </provider>
    <provider id="remote-s3" type="remote">
        <zone>remote</zone>
    </provider>
    <provider id="eventual-cluster-s3" type="eventual-cluster">
        <zone>local</zone>
    </provider>
    <provider id="s3" type="s3">
        <endpoint>http://s3.amazonaws.com</endpoint>
        <identity>[ENTER IDENTITY HERE]</identity>
        <credential>[ENTER CREDENTIALS HERE]</credential>
        <path>[ENTER PATH HERE]</path>
        <bucketName>[ENTER BUCKET NAME HERE]</bucketName>
    </provider>
</config>
```
Because you must configure the s3 provider with parameters specific to your account (but can leave all others with the recommended values), if you choose to use this template, your binarystore.xml configuration file should look like this:
```xml
<config version="2">
    <chain template="cluster-s3"/>
    <provider id="s3" type="s3">
        <endpoint>http://s3.amazonaws.com</endpoint>
        <identity>[ENTER IDENTITY HERE]</identity>
        <credential>[ENTER CREDENTIALS HERE]</credential>
        <path>[ENTER PATH HERE]</path>
        <bucketName>[ENTER BUCKET NAME HERE]</bucketName>
    </provider>
</config>
```
Testing Your Configuration
To test your configuration, you can simply deploy an artifact to Artifactory and then inspect your persistent storage (whether on your nodes' file systems or on your cloud provider) and verify that the artifact has been stored correctly.
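For example, you can deploy a small test file with a PUT request (the repository name, credentials, and host below are placeholders):

```sh
# Deploy a test artifact to a repository
echo "storage migration test" > test.txt
curl -u admin:password -T test.txt "http://localhost:8081/artifactory/example-repo-local/test.txt"
```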