
Overview

The main feature added in Artifactory 3.1 is support for a High Availability network configuration: a cluster of two or more active/active, read/write Artifactory servers on the same Local Area Network (LAN).

This offers several benefits to your organization and is included with the Artifactory Pro Enterprise Value Pack.

Benefits

Maximize Uptime

Artifactory HA's redundant network architecture means that there is no single point of failure, and your system continues to operate as long as at least one of the Artifactory nodes is operational. This maximizes your uptime and can take it to levels of up to "five nines" availability.

Manage Heavy Loads

By using a redundant array of Artifactory server nodes in the network, your system can accommodate larger load bursts with no compromise to performance. With horizontal server scalability, you can easily increase your capacity to meet any load requirements as your organization grows.

Minimize Maintenance Downtime

By using an architecture with multiple Artifactory servers, Artifactory HA lets you perform most maintenance tasks with no system downtime.


Architecture

The Artifactory HA architecture consists of a load balancer connected to a cluster of two or more Artifactory servers that share a common database and Network File System. The Artifactory cluster nodes must be connected through a fast internal LAN in order to support high system performance, stay synchronized, and instantly notify each other of actions performed in the system. One of the Artifactory cluster nodes is configured as a "master" node; its role is to execute cluster-wide tasks such as cleaning up unreferenced binaries.

The JFrog support team is available to help you configure the Artifactory cluster nodes. It is up to your organization's IT staff to configure your load balancer, database, and network file system.

Network Topology

Load Balancer

The load balancer is the entry point to your Artifactory HA installation and optimally distributes requests to the Artifactory server nodes in your system.

Your load balancer must support session affinity (sticky sessions), and it is the responsibility of your organization to manage and configure it correctly.

The code samples below show some basic examples of load balancer configurations:

Apache load balancer configuration
First install the following modules:

LoadModule headers_module modules/mod_headers.so
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule proxy_http_module modules/mod_proxy_http.so

Then configure as follows:

<VirtualHost *:80>
    ServerAdmin admin@test.com
    ServerName apache-ha-test
    ServerAlias apache-ha-test.jfrog.local
    Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/artifactory/" env=BALANCER_ROUTE_CHANGED

    <Proxy balancer://tomcats>
        # Artifactory server #1
        BalancerMember http://10.0.0.32:8081 route=art1
        # Artifactory server #2
        BalancerMember http://10.0.0.33:8081 route=art2
        ProxySet lbmethod=byrequests
        ProxySet stickysession=ROUTEID
    </Proxy>

    ProxyPreserveHost on
    ProxyPass /balancer-manager !
    ProxyPass / balancer://tomcats/
    RewriteEngine On
    RewriteRule ^/$ /artifactory [R,L]

    <Location /balancer-manager>
        SetHandler balancer-manager
        Order deny,allow
        Allow from 10.0.0 192.168.0
    </Location>

    LogLevel warn
    ErrorLog /var/log/httpd/apache-ha-test.error.log
    CustomLog /var/log/httpd/apache-ha-test.access.log combined
</VirtualHost>
nginx load balancer configuration example
http {
    ...
    upstream artifactory {
        ip_hash; # for stickiness by IP
        server IP_SERVER_1:8081;
        server IP_SERVER_2:8081;
    }
    server {
        listen 80;
        server_name YOUR_SERVER_NAME;
        ...
        rewrite ^/$ http://$host/artifactory/webapp/login.html;
        location / {
            proxy_pass http://artifactory;
        }
    }
}

More details are available on the nginx website.

Artifactory Server Cluster

Each Artifactory server in the cluster receives requests routed to it by the load balancer. All servers share a common database and NFS mount, and communicate with each other to ensure that they are synchronized on all transactions.

Local Area Network

To ensure good performance and synchronization of the system, all the components of your Artifactory HA installation must be installed on the same high-speed LAN.

In theory, Artifactory HA could work over a Wide Area Network (WAN); in practice, however, network latency makes it impractical to achieve the performance required for high availability systems.

Artifactory HA Architecture

Shared Network File System

Artifactory HA requires a shared file system to store cluster-wide configuration and binary files. Your shared file system must support concurrent requests and file locking.

Currently, the only shared file system that has been certified to work with Artifactory HA and is supported is NFS (versions 3 and 4).

Mounting the NFS from Artifactory HA nodes

When mounting the NFS on the client side, make sure to add the following option to the mount command:

lookupcache=none

This ensures that nodes in your HA cluster will immediately see any changes to the NFS made by other nodes.
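For illustration, a minimal sketch of a mount command that includes this option; the NFS server name and directory paths below are placeholders for your own environment:

# Mount the shared NFS export with lookup caching disabled, so that
# metadata changes made by other cluster nodes are visible immediately
mount -t nfs -o lookupcache=none nfs-server:/export/artifactory-ha /mnt/artifactory-ha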

Database

Artifactory HA requires an external database, and currently supports MySQL, Oracle, MS SQL and PostgreSQL. For details on how to configure any of these databases, please refer to Changing the Default Storage.
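As a rough illustration, assuming a MySQL backend configured according to Changing the Default Storage, every cluster node would point at the same database with JDBC settings along these lines (the host, schema, and credentials below are placeholders, and the exact configuration file and property names are documented on that page):

# All HA nodes must point at the same database schema
type=mysql
driver=com.mysql.jdbc.Driver
url=jdbc:mysql://db-host:3306/artifactory?characterEncoding=UTF-8
username=artifactory
password=your-password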

Since Artifactory HA contains multiple Artifactory cluster nodes, your database must be powerful enough to service all the nodes in the system. Moreover, your database must be able to support the maximum number of connections possible from all the Artifactory cluster nodes combined; for example, if each node maintains a pool of up to 100 database connections, a three-node cluster may open up to 300 concurrent connections.

If you are replicating your database, you must ensure that at any given point in time all nodes see a consistent view of the database, regardless of which specific database instance they access. Eventual consistency and write-behind database synchronization are not supported.