Overview

From version 3.1, Artifactory supports a High Availability network configuration with a cluster of two or more active/active, read/write Artifactory servers on the same Local Area Network (LAN).

Setting up several servers in an HA configuration is supported with an Enterprise License and presents several benefits to your organization:

Maximize Uptime

Artifactory HA's redundant network architecture means that there is no single point of failure, and your system can continue to operate as long as at least one of the Artifactory nodes is operational. This maximizes your uptime and can take it to levels of up to "five nines" availability.

Manage Heavy Loads

By using a redundant array of Artifactory server nodes in the network, your system can accommodate larger load bursts with no compromise to performance. With horizontal server scalability, you can easily increase your capacity to meet any load requirements as your organization grows.

Minimize Maintenance Downtime

By using an architecture with multiple Artifactory servers, Artifactory HA lets you perform most maintenance tasks with no system downtime.


Architecture

The Artifactory HA architecture consists of a load balancer connected to a cluster of two or more Artifactory servers that share a common database and Network File System. The Artifactory cluster nodes must be connected through a fast internal LAN to support high system performance and to stay synchronized, notifying each other of actions performed in the system instantaneously. One of the Artifactory cluster nodes is configured as the "master" node. Its role is to execute cluster-wide tasks, such as cleaning up unreferenced binaries.
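As an illustration of how the "master" node is designated, each cluster node carries its own HA configuration file. The sketch below is hypothetical; the file name and property names shown are assumptions that may differ between Artifactory versions, so consult the HA Installation and Setup documentation for your release:

# Hypothetical per-node HA configuration (e.g. $ARTIFACTORY_HOME/etc/ha-node.properties)
# Property names are illustrative and version-dependent.
node.id=art-1
context.url=http://10.0.0.32:8081/artifactory
membership.port=10001
# Only one node in the cluster is designated as the "master" (primary) node
primary=true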

The JFrog support team is available to help you configure the Artifactory cluster nodes. It is up to your organization's IT staff to configure your load balancer, database, and network file system.

Network Topology

Load Balancer

The load balancer is the entry point to your Artifactory HA installation and optimally distributes requests to the Artifactory server nodes in your system.

Your load balancer must support session affinity (sticky sessions), and it is the responsibility of your organization to manage and configure it correctly.

The code samples below show basic examples of load balancer configurations for Apache HTTP Server and for nginx:

First install the following modules:

LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule proxy_http_module modules/mod_proxy_http.so

Then configure as follows:

<VirtualHost *:80>
    ServerAdmin admin@test.com
    ServerName apache-ha-test
    ServerAlias apache-ha-test.jfrog.local

    Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/artifactory/" env=BALANCER_ROUTE_CHANGED

    <Proxy balancer://tomcats>
        # Artifactory server #1
        BalancerMember http://10.0.0.32:8081 route=art1
        # Artifactory server #2
        BalancerMember http://10.0.0.33:8081 route=art2
        ProxySet lbmethod=byrequests
        ProxySet stickysession=ROUTEID
    </Proxy>

    ProxyPreserveHost on
    ProxyPass /balancer-manager !
    ProxyPass / balancer://tomcats/

    RewriteEngine On
    RewriteRule ^/$ /artifactory [R,L]

    <Location /balancer-manager>
        SetHandler balancer-manager
        Order deny,allow
        Allow from 10.0.0 192.168.0
    </Location>

    LogLevel warn
    ErrorLog /var/log/httpd/apache-ha-test.error.log
    CustomLog /var/log/httpd/apache-ha-test.access.log combined
</VirtualHost>

http {
    ...
    upstream artifactory {
        ip_hash; # for stickiness by IP
        server IP_SERVER_1:8081;
        server IP_SERVER_2:8081;
    }

    server {
        listen 80;
        server_name YOUR_SERVER_NAME;
        ...
        rewrite ^/$ http://$host/artifactory/webapp/login.html;

        location / {
            proxy_pass http://artifactory;
        }
    }
}

More details are available on the nginx website.
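To sanity-check the routing, you can query Artifactory's system ping endpoint through the load balancer. The example below assumes the hostname from the Apache configuration above; adapt it to your own environment:

# Expect an HTTP 200 response with body "OK"; with the Apache example above,
# the first response should also carry a Set-Cookie: ROUTEID=... header
curl -si http://apache-ha-test.jfrog.local/artifactory/api/system/ping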

Artifactory HA Architecture

Artifactory Server Cluster

Each Artifactory server in the cluster receives requests routed to it by the load balancer. All servers share a common database and NFS mount, and communicate with each other to ensure that they are synchronized on all transactions.

Local Area Network

To ensure good performance and synchronization of the system, all the components of your Artifactory HA installation must be installed on the same high-speed LAN.

In theory, Artifactory HA could work over a Wide Area Network (WAN); however, in practice, network latency makes it impractical to achieve the performance required for high availability systems.

Shared Network File System or Cloud Storage

Artifactory HA requires a shared file system to store cluster-wide configuration and binary files. The configuration files must be maintained on an NFS (version 3 or 4) file system that supports concurrent requests and file locking, while your binaries may be stored on the same NFS or on cloud storage (currently, Amazon S3 and Google Cloud Storage are supported).
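For example, the shared file system might be mounted identically on every cluster node along the following lines (the server name, export path, and mount point below are placeholders for your environment):

# Mount the shared NFS export on each Artifactory cluster node (illustrative values)
sudo mount -t nfs -o rw,hard,nfsvers=4 nfs-server.example.com:/export/artifactory-ha /mnt/artifactory-ha

# Or make the mount persistent in /etc/fstab:
# nfs-server.example.com:/export/artifactory-ha   /mnt/artifactory-ha   nfs   rw,hard,nfsvers=4   0 0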

Database

Artifactory HA requires an external database, and currently supports MySQL, Oracle, MS SQL and PostgreSQL. For details on how to configure any of these databases please refer to Configuring the Database.
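As a minimal sketch for MySQL (the database name, user, and password below are placeholders; Configuring the Database remains the authoritative reference), the schema and user for Artifactory might be created as follows:

-- Illustrative only: create a dedicated database and user for Artifactory
CREATE DATABASE artifactory CHARACTER SET utf8;
CREATE USER 'artifactory'@'%' IDENTIFIED BY 'CHANGE_ME';
GRANT ALL PRIVILEGES ON artifactory.* TO 'artifactory'@'%';
FLUSH PRIVILEGES;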

Since Artifactory HA contains multiple Artifactory cluster nodes, your database must be powerful enough to service all the nodes in the system. Moreover, your database must be able to support the maximum number of connections possible from all the Artifactory cluster nodes in your system.
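As a rough sizing illustration (the per-node pool size depends on your own Artifactory configuration): with N cluster nodes each opening up to P pooled connections, the database must allow at least N x P connections, plus headroom. For MySQL, for example, this limit is controlled by the max_connections setting:

# my.cnf (illustrative): 3 Artifactory nodes x ~100 pooled connections each, plus headroom
[mysqld]
max_connections = 400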

If you are replicating your database, you must ensure that at any given point in time all nodes see a consistent view of the database, regardless of which specific database instance they access. Eventual consistency and write-behind database synchronization are not supported.


Watch the Screencast

This screencast shows how a Jenkins build, resolving artifacts through an Artifactory HA cluster, continues to run uninterrupted even when one of the cluster nodes stops operating.