
Overview

JFrog products can be configured for High Availability with a cluster of 2 or more active/active nodes on the same Local Area Network (LAN).

Supported with an Enterprise license, an HA configuration provides the following benefits:

Optimal Resilience

Maximize your uptime. If one or more nodes becomes unavailable or is taken down for an upgrade, the load is shared between the remaining nodes, ensuring optimal resilience and uptime.

Improved Performance with Load Balancing

Scale your environment with as many nodes as you need. All cluster nodes in an HA configuration are synchronized, and jointly share and balance the workload between them using a load balancer. When a node becomes unavailable, the cluster automatically spreads the workload across the remaining nodes.

Managed Heavy Loads

Accommodate larger load bursts with no compromise to performance. With horizontal server scalability, you can easily increase your capacity to meet any load requirements as your organization grows.

Always Synchronized

Data, configuration, cached objects and scheduled job changes are seamlessly and immediately synchronized across all cluster nodes.

Installing and Upgrading to HA

For additional information, refer to the installation and upgrade sections.

HA Architecture

The HA architecture consists of 3 layers: load balancer, application and common resources.

Getting Help

The JFrog support team is available to help you configure the Artifactory cluster nodes. Configuring your load balancer, database and object store is the responsibility of your organization's IT staff.

Load Balancer

The load balancer is the entry point to your HA installation and optimally distributes requests to the server nodes in your system. It is the responsibility of your organization to manage and configure it correctly.

Use Artifactory's reverse proxy generator

You may generate configuration snippets for Apache HTTPD and Nginx-backed Artifactory High Availability clusters with the built-in Reverse Proxy generator; it will detect the existing server nodes and add them to the generated configuration file.

The code samples below show some basic examples of load balancer configurations:

JFrog Artifactory

Apache HTTPD

First install the following modules:

LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule proxy_http_module modules/mod_proxy_http.so

Then configure as follows:

<VirtualHost *:80>
    ServerAdmin admin@frogs.com
    ServerName artifactory.www.si-fil.com
    ServerAlias *.www.si-fil.com

    <Proxy balancer://tomcats>
        # Artifactory server #1
        BalancerMember http://IP_SERVER_1:PORT route=art1
        # Artifactory server #2
        BalancerMember http://IP_SERVER_2:PORT route=art2
        ProxySet lbmethod=byrequests
    </Proxy>

    ProxyPreserveHost on
    ProxyPass /balancer-manager !
    ProxyPass / balancer://tomcats/
    ProxyPassReverse /artifactory https:///artifactory
    RewriteEngine On
    RewriteRule ^/$ /artifactory [R,L]
    LogLevel warn
    ErrorLog /var/log/httpd/apache-ha-test.error.log
    CustomLog /var/log/httpd/apache-ha-test.access.log combined
</VirtualHost>
Nginx

http {
    ...
    upstream artifactory {
        server IP_SERVER_1:8081;
        server IP_SERVER_2:8081;
    }
    server {
        listen 80;
        server_name YOUR_SERVER_NAME;
        ...
        rewrite ^/$ http://$host/artifactory/webapp;
        location / {
            proxy_pass http://artifactory;
        }
    }
}

More details are available on the nginx website.
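To take an unresponsive node out of rotation automatically, the upstream block can also be tuned with nginx's passive health-check parameters. This is a minimal sketch only; the thresholds below are illustrative and should be adapted to your environment:

upstream artifactory {
    # Stop routing to a node after 3 consecutive failures,
    # then retry it after 30 seconds.
    server IP_SERVER_1:8081 max_fails=3 fail_timeout=30s;
    server IP_SERVER_2:8081 max_fails=3 fail_timeout=30s;
}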

JFrog Xray

############################################################################################################
# Xray Reverse Proxy https+http with Nginx
############################################################################################################
ssl_certificate     /etc/nginx/certs/xray.cert;
ssl_certificate_key /etc/nginx/certs/xray.key;
ssl_session_cache shared:SSL:1m;
ssl_prefer_server_ciphers on;

## server configuration
upstream xray {
    server IP_SERVER_1:8000;
    server IP_SERVER_2:8000;
}

server {
    listen 443 ssl;
    listen 80;
    server_name ;

    if ($http_x_forwarded_proto = '') {
        set $http_x_forwarded_proto $scheme;
    }

    ## Application specific logs
    ## access_log /var/log/nginx/-access.log timing;
    ## error_log /var/log/nginx/-error.log;

    chunked_transfer_encoding on;
    client_max_body_size 0;

    location / {
        proxy_read_timeout 900;
        proxy_pass_header Server;
        proxy_cookie_path ~*^/.* /;
        proxy_pass http://xray;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection upgrade;
    }
}

More details are available on the nginx website.




Application

HA presents a cluster of two or more nodes that share common resources. Each cluster node runs all microservices, described in the System Architecture.

Server Cluster

The server cluster can be comprised of multiple nodes, and each node receives the requests routed to it by the load balancer. All nodes share a set of common resources, such as a database and a filestore. Clustering those common resources is not part of the server cluster; to set it up, follow the documentation for your selected common resources. The common resources are also used by the server cluster nodes to communicate with each other and ensure that they are synchronized on all transactions.

Local Area Network

To ensure good performance and synchronization of the system, all the components of your HA installation must be installed on the same high-speed LAN.

In theory, HA could work over a Wide Area Network (WAN); in practice, however, network latency makes it impractical to achieve the performance required for high availability systems.

Common Resources

Each service requires filestore and database services. The following sections summarize the filestore and database options for JFrog Artifactory, JFrog Xray, JFrog Mission Control and JFrog Distribution.


Filestore

JFrog Artifactory

  • Local file system, in which binaries are stored with redundancy, using a binary provider that manages synchronizing files between the cluster nodes according to the redundancy defined (see the binarystore.xml sketch below).
  • Cloud storage: Amazon S3 and Google Cloud Storage
  • Network File System (NFS)
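For the local file system option, the redundant binary provider is configured through Artifactory's binarystore.xml. The snippet below is a minimal sketch assuming the documented cluster-file-system chain template; refer to the filestore configuration documentation for the template and redundancy settings that match your setup:

<!-- binarystore.xml: store binaries on each node's local disk and
     synchronize them between the cluster nodes according to the
     configured redundancy -->
<config version="2">
    <chain template="cluster-file-system"/>
</config>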

JFrog Xray

The storage used by Xray is not a common resource. Only node-specific files, such as configuration and temporary files, are saved to the disk.

JFrog Mission Control

The local file system is used to store information specific to the node. The main file used here is the mc.key, which is used to encrypt the database content. This key needs to be synchronized between the nodes manually.


Database / Third Party Application

JFrog Artifactory

  • Derby
    Default internal database bundled into the Artifactory installation.

Artifactory HA requires an external database, which is fundamental to the management of binaries and is also used to store cluster-wide configuration files. You can configure your own database from the following list (a connection sketch is shown after the notes below):

  • MySQL
  • Oracle
  • MS SQL
  • PostgreSQL
  • MariaDB

Since Artifactory HA contains multiple Artifactory cluster nodes, your database must be powerful enough to service all the nodes in the system. It must also be able to support the maximum number of connections possible from all the Artifactory cluster nodes; for example, if each node is configured with a connection pool of up to 100 connections, a four-node cluster can open up to 400 concurrent connections to the database.

If you are replicating your database, you must ensure that at any given point in time all nodes see a consistent view of the database, regardless of which specific database instance they access. Eventual consistency and write-behind database synchronization are not supported.
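As an illustration of connecting the cluster nodes to one of the external databases above, the snippet below is a minimal sketch of the database section of a node's system.yaml, assuming a JFrog Platform 7.x style installation and a PostgreSQL database; the host, database name and credentials are placeholders, and the exact keys may differ between versions:

shared:
  database:
    # External PostgreSQL instance shared by all cluster nodes (placeholder values)
    type: postgresql
    driver: org.postgresql.Driver
    url: jdbc:postgresql://DB_HOST:5432/artifactory
    username: artifactory
    password: CHANGE_ME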

JFrog Xray

  • RabbitMQ (Microservice Communication and Messaging)

    Automatically installed. RabbitMQ is installed as part of the Xray installation on every node, and in an HA architecture it uses queue mirroring between the different RabbitMQ nodes.

    Xray has multiple flows, such as scanning, impact analysis and database sync. These flows require processing completed by the different Xray services and contain multiple steps that those services carry out. Xray uses RabbitMQ to manage these flows and to track synchronous and asynchronous communication between the services. A purely illustrative mirroring policy is shown after this list.

  • PostgreSQL (Components Graph Database)

    Every artifact and build indexed by Xray is broken down into multiple components. These components, and the relationships between them, are represented in a checksum-based components graph. Xray uses PostgreSQL to store and query this components graph.
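Queue mirroring is set up by the Xray installation itself, so there is nothing you need to run manually. Purely as an illustration of the concept, a classic RabbitMQ mirroring policy that mirrors every queue to all cluster nodes and synchronizes mirrors automatically looks like this (generic RabbitMQ usage, not an Xray-specific command):

# Illustrative only: mirror all queues across all RabbitMQ cluster nodes
rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all","ha-sync-mode":"automatic"}'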
JFrog Mission Control

  • PostgreSQL

    An external database is required; it is fundamental to the management of Mission Control data and is also used to store cluster-wide configuration files. Currently PostgreSQL is supported, and any change to its configuration requires restarting all Mission Control nodes for the changes to take effect.

JFrog Distribution

  • PostgreSQL

    Distribution HA requires an external database, which is fundamental to the management of binaries and is also used to store cluster-wide configuration files. Currently PostgreSQL is supported, and any change to its configuration only requires restarting a single Distribution node for the changes to take effect for the whole Distribution cluster.
