Overview

This page outlines the system requirements for setting up and running the JFrog product servers for each product, including:

  • Recommended and required hardware
  • Supported platforms
  • Network requirements
  • Java requirements
  • Supported browsers
  • Other special requirements

Separate server for each component

While not a strict requirement, it is strongly recommended to run each JFrog product on its own separate server.

Use a dedicated server for Xray with no other software running to alleviate performance bottlenecks, avoid port conflicts, and avoid setting uncommon configurations.

Requirements Matrix

The following tables describe what is required for each single-service node. In a High Availability configuration, a single-service node represents each of the HA server instances.

Supported Platforms:

Product | Debian | CentOS* | RHEL | Ubuntu | Windows Server | Helm Charts | SLES
Artifactory | 9.x, 10.x | 7.x | 7.x, 8.x | 18.04, 20.04 | 2016 or 2019 | 2.x, 3.x | 12 SP5
Insight | 9.x, 10.x | 7.x | 7.x, 8.x | 18.04, 20.04 | Not supported | 2.x, 3.x | 12 SP5
Mission Control | 9.x, 10.x | 7.x | 7.x, 8.x | 18.04, 20.04 | Not supported | 2.x, 3.x | 12 SP5
Xray | 9.x, 10.x | 7.x | 7.x, 8.x | 18.04, 20.04 | Not supported | 2.x, 3.x | Not supported
Distribution | 10.x | 7.x | 7.x, 8.x | 18.04, 20.04 | Not supported | 2.x, 3.x | Not supported
Pipelines | Not supported | 7.x | 7.x, 8.x | 18.04, 20.04 | Build nodes only | 2.x, 3.x | Not supported

Breaking Change Affecting RPM/Yum/Linux Archive Installations on CentOS 7.x*

As part of our commitment to our customers to maintain the security and reliability of your JFrog Platform, from Artifactory version 7.43.x, JFrog Artifactory officially runs with JDK 17 and Tomcat 9.x on all installation types. Note that JDK 17 and Tomcat 9.x are not supported on all CentOS 7.x versions.

CentOS 8.x Support

CentOS 8.x reached its end-of-life in December 2021. CentOS 8.x support for JFrog products will be deprecated by the end of June 2022.

SLES 12 SP5 supports Docker Compose installation for all products except for Pipelines.

Reserving Ports for Services

As JFrog adds additional services to the JFrog Platform portfolio, there is a need to "reserve" ports for the Platform to ensure that the services work properly. To this end, JFrog recommends reserving ports 8000-8100 (this is in addition to the existing internal ports documented below).
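
On Linux hosts, one way to keep these ports from being handed out to other processes as ephemeral ports is the net.ipv4.ip_local_reserved_ports kernel parameter. This is a minimal sketch, not a JFrog-documented step; adapt it to your own hardening process:

# Reserve ports 8000-8100 so the kernel never assigns them as ephemeral ports
sysctl -w net.ipv4.ip_local_reserved_ports=8000-8100

# Persist the setting across reboots
echo "net.ipv4.ip_local_reserved_ports = 8000-8100" > /etc/sysctl.d/99-jfrog-ports.conf
sysctl --system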

Minimum System and Application Requirements

Product | Processor | Memory | Storage | External Network Port | Internal Network Ports (default) | Databases/Third Party Applications
Artifactory (Version 7.0 and above)
  • 64-bit OS/JVM
  • Processor and memory: based on the expected number of active clients


If the Mission Control microservice is enabled, you will need to take into account an additional 100 MB memory footprint. For more information, see Migrating from Mission Control to Insight.

Based on expected artifact storage volume. Fast disk with free space that is at least 3 times the total size of stored artifacts.

  • 8081
  • 8082
  • 8081 for Artifactory
  • 8040 and 8045 for Access
  • 8048 and 9092 for the Replicator
  • 8070 for Web
  • 8086 for Metadata
  • 8082, 8046, 8047, 8049, and 8091 for the Router
  • 8061, and 8062 for Events
  • 8071 and 8072 for Integrations
  • 8030 for JFConnect
  • 8036 for Observability HTTP port
  • 8037 for gRPC port

Supported:
Oracle (12.2, 18, 19)

Artifactory 7.0 to 7.6: MySQL 5.7

Artifactory 7.7 and above: MySQL 5.7 and 8.x

MySQL 5.6 is no longer supported as it has reached its EOL; upgrade to one of the MySQL versions above.


Microsoft SQL Server
(2012 and above)

Artifactory does not support Kerberos SSO for MS SQL.

PostgreSQL:

  • 9.5 (EOL)
  • 9.6 (EOL)
  • 10.x
  • 11.x
  • 12.x
  • 13.x

MariaDB
(10.2.9-10.4, 10.5.9) Note: Refrain from using MariaDB 10.5.x versions other than version 10.5.9 due to this known issue.

Insight
(Version 1.0.1 and above)

(You can install and use Insight only if you use Artifactory 7.27.3 or later)

Insight will only display Xray charts if you have Xray version 3.33.3 or later; upgrade Xray to version 3.33.3 to view the Xray charts.

Minimum requirements, assuming the use of an external database.
Actual values may change based on the amount of data in your application.

By default, Insight stores one year of storage information, one year of metrics information, one year of events information (like garbage collection details), and one day of requests information. The following sizing information is provided based on the default configuration.

If you want to use Insight in a high-availability configuration, you must have an odd number of nodes (three or more).

  • 8082
  • 8080 for Insight Server
  • 8087 for Insights
  • 9200 for Elasticsearch (if bundled)
  • 5432 for PostgreSQL (if bundled)
  • 8082, 8046, 8047 and 8049 for the Router

Required:

PostgreSQL

  • 10.x
  • 11.x
  • 12.x
  • 13.x

Elasticsearch

  • 7.14.1 (for Insight 1.0.1 to 1.1.3)
  • 7.15.1 (for Insight 1.2.3 and above)


Up to 500 Artifactory Repositories
4 cores

5 GB

  • Elasticsearch Service - 3 GB (default minimum heap memory allocation - 2 GB, default maximum heap memory allocation - 2 GB)
  • Insight Server - 1 GB (default minimum heap memory allocation - 400 MB, default maximum heap memory allocation - 400 MB)
  • Insight Scheduler - 1 GB (default minimum heap memory allocation - 400 MB, default maximum heap memory allocation - 400 MB)

120 GB

  • Elasticsearch - 100 GB
  • PostgreSQL DB - 10 GB
  • Insight Installation and logs - 10 GB
500 to 1500 Artifactory Repositories
4 cores

8 GB

  • Elasticsearch Service - 4 GB (default minimum heap memory allocation - 2 GB, default maximum heap memory allocation - 2 GB)
  • Insight Server - 2 GB (default minimum heap memory allocation - 800 MB, default maximum heap memory allocation - 800 MB)
  • Insight Scheduler - 2 GB (default minimum heap memory allocation - 800 MB, default maximum heap memory allocation - 800 MB)

220 GB

  • Elasticsearch - 200 GB
  • PostgreSQL DB - 10 GB
  • Insight Installation and logs - 10 GB
1500 to 2500 Artifactory Repositories
4 cores

14 GB

  • Elasticsearch Service - 6 GB (default minimum heap memory allocation - 3 GB, default maximum heap memory allocation - 3 GB)
  • Insight Server - 4 GB (default minimum heap memory allocation - 1.6 GB, default maximum heap memory allocation - 1.6 GB)
  • Insight Scheduler - 4 GB (default minimum heap memory allocation - 1.6 GB, default maximum heap memory allocation - 1.6 GB)

420 GB

  • Elasticsearch - 400 GB
  • PostgreSQL DB - 10 GB
  • Insight Installation and logs - 10 GB
More Than 2500 Artifactory Repositories
Contact JFrog Support for sizing requirements.

Mission Control
(Version 4.0 to 4.7.x)

Minimum requirements, assuming the use of an external database.
Actual values may change based on the amount of data in your application.

4 cores | 12 GB | 100 GB
  • 8082
  • 8080 for Mission Control Server
  • 8085 for Scheduler
  • 8087 for Executor
    (Obsolete from Mission Control version 4.5 and above)
  • 8087 for Insights
  • 9200 for Elasticsearch (if bundled)
  • 5432 for PostgreSQL (if bundled)
  • 8082, 8046, 8047 and 8049 for the Router

Required:

PostgreSQL

  • 9.5 (EOL)
  • 9.6 (EOL)
  • 10.x
  • 11.x
  • 12.x
  • 13.x

Elasticsearch

  • 6.6.x (for Mission Control versions 4.0 and 4.2)
  • 7.6.1 (for Mission Control versions 4.3.2 to 4.5.0)
  • 7.8.0 and 7.8.1(for Mission Control version 4.6.0)
  • 7.10.2 (for Mission Control version 4.7.0 to 4.7.7)
  • 7.12.1 (for Mission Control version 4.7.8)
  • 7.13.2 (for Mission Control version 4.7.9)
  • 7.14.1 (for Mission Control version 4.7.15)

Xray
(Version 3.0 and above)

The requirements presented here are based on the size of your environment.

Use a dedicated server for Xray with no other software running to alleviate performance bottlenecks, avoid port conflicts, and avoid setting uncommon configurations.

Up to 100k indexed artifacts, and 1K artifacts/builds per day

8082

  • 8000 for Xray Server
  • 7000 for Analysis
  • 7002 for Indexer
  • 7003 for Persist
  • 8082, 8046, 8047 and 8049 for the Router
  • 4369, 5671, 5672, 15672, and 25672 for RabbitMQ
  • 5432 for PostgreSQL (if bundled)
  • 8036 for Observability HTTP port
  • 8037 for gRPC port

Required:

PostgreSQL:

  • 9.5 (EOL)
  • 9.6 (EOL)
  • 10.x
  • 11.x
  • 12.x
  • 13.x - for Xray version 3.18


Supported:
Ubuntu 20.04 (from version 3.82)


Xray and DB: 6 CPU

Xray and DB: 24 GB

Xray and DB: 500 GB (SSD, 3000 IOPS)

Up to 1M indexed artifacts, and 10k artifacts/builds per day
  • Xray (x2 nodes): 4 CPU
  • DB: 8 CPU
  • Xray (x2 nodes): 8 GB
  • DB: 32 GB
  • Xray (x2 nodes): 300 GB
  • DB: 500 GB (SSD, 3000 IOPS)
Up to 2M indexed artifacts, and 20k artifacts/builds per day
  • Xray (x3 nodes): 6 CPU
  • DB: 16 CPU
  • Xray (x3 nodes): 12 GB
  • DB: 32 GB
  • Xray (x3 nodes): 300 GB
  • DB: 1 TB (SSD, 3000 IOPS)
Up to 10M indexed artifacts, and 50k artifacts/builds per day
  • Xray (x3 nodes): 8 CPU
  • DB: 16 CPU
  • Xray (x3 nodes): 24 GB
  • DB: 64 GB
  • Xray (x3 nodes): 300 GB
  • DB: 2.5 TB (SSD, 3000 IOPS)
Over 10M indexed artifacts, and 50k artifacts/builds per day
Contact JFrog Support for sizing requirements.

The number of nodes above refers to High Availability (HA) setups, not Disaster Recovery.

Distribution (Version 2.0 and above)

Minimum requirements, assuming the use of an external database.
Actual values may change based on the amount of data in your application.

4 CPU | 8 GB | 200 GB SSD storage

8082

  • 8080 for the Distribution Server
  • 8082, 8046, 8047 and 8049 for the Router
  • 6379 for Redis
  • 5432 for PostgreSQL (if bundled)
  • 8036 for Observability HTTP port
  • 8037 for gRPC port

Required:
PostgreSQL:

  • 9.5 (EOL)
  • 9.6 (EOL)
  • 10.x
  • 11.x
  • 12.x
  • 13.x

Supported:
Redis 6.0.5

Pipelines (Version 1.0 and above)

Application

4 cores

Build Nodes

2 cores for Linux build nodes

4 cores for Windows Server 2019 build nodes

Application

8 GB

Build Nodes

3.75 GB for Linux build nodes

8 GB for Windows Server 2019 build nodes

100 GB
  • 8082
  • 30001
  • 30200 (Pipelines 1.0.0 - 1.10.0 only)
  • 8082 for Pipelines API
  • 30001 for Pipelines WWW (UI)
  • 22 for SSH access to the instance
  • 5432 for Database (PostgreSQL) access
  • 30200 for RabbitMQ
  • 30201 for RabbitMQ Admin
  • 30100 for Vault
  • 6379、16379、6380、16380、6381、16381进行复述,Cluster

Pipelines does not support external Redis and you cannot use TLS for Redis.

Required:
PostgreSQL:

  • 9.5 (EOL)
  • 9.6 (EOL)
  • 10.x
  • 11.x
  • 12.x
  • 13.x

RabbitMQ: 3.8.3



ARM64 Support

From version 7.41.4, Artifactory supports installation on ARM64 architecture through Helm and Docker installations. You must set up an external database as the Artifactory database since Artifactory does not support the bundled database with the ARM64 installation. Artifactory installation pulls the ARM64 image automatically when you run the Helm or Docker installation on the ARM64 platform.

Currently, ARM64 support is not available for other JFrog products.
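
To confirm that a host is running on ARM64 before choosing this installation path, a generic Linux check such as the following is enough:

# Print the machine hardware name; ARM64 hosts report aarch64 (or arm64)
uname -m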


Network

Artifactory, Xray, and other JFrog products all need to be set with static IP addresses. These services also need to be able to communicate directly with each other over the same LAN connection. Hosting these services in geographically distant locations may cause health checks to temporarily fail. Ensure the ports are open and no firewalls block communications between these services.
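
A simple way to confirm that a required port is reachable between two services is a TCP probe with netcat. The hostname below is a placeholder; substitute your own hosts and the ports listed in the matrix above:

# Verify that the Artifactory router port (8082) is reachable from this host
nc -zv artifactory.example.local 8082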


Java

Java-based products (Artifactory, Distribution, Insight, Mission Control) must run with JDK 17+. The JDK is already bundled into the applications.

  • From Distribution 2.13.2, the required JDK version is JDK 17.
  • From Artifactory 7.43.x, the supported JDK version is JDK 17, which will be bundled into Artifactory.

JDK 11 is no longer supported.

JVM Memory Allocation

While not a strict requirement, we recommend that you modify the JVM memory parameters used to run Artifactory.

You should reserve at least 512 MB for Artifactory. The larger your repository or number of concurrent users, the larger you need to make the -Xms and -Xmx values.

Set your JVM parameters in the system.yaml configuration file.

Configuring the JVM parameters

shared:
  extraJavaOpts: "-Xms512m -Xmx2g"
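
After restarting Artifactory, you can confirm that the new heap settings took effect by inspecting the running Java process. This is a generic check rather than a JFrog-documented procedure:

# List Java processes and verify the -Xms/-Xmx values set via extraJavaOpts are applied
ps -ef | grep java | grep Xmx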

Browsers

Artifactory has been tested with the latest versions of:

  • Chrome
  • Firefox
  • Safari (for Mac)
  • Edge (Chromium-based versions)

System Time Synchronization

The JFrog Platform requires time synchronization between all JFrog services within the same Platform.

Unsynchronised services may cause issues during authentication and token verification.
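
On systemd-based Linux hosts, one quick way to verify that a node's clock is being synchronized (for example via NTP or chrony) is:

# Look for "System clock synchronized: yes" in the output
timedatectl status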


Docker Requirements

For Docker and Docker Compose installations, JFrog services require Docker v18 and above (Docker 18.09 and above for Pipelines) and Docker Compose v1.24 and above to be installed on the machine on which you want to run them.

For installation instructions, please refer to the Docker and Docker Compose documentation.
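
To confirm the installed versions meet these minimums, check them from the command line:

# Verify the Docker Engine version (must be v18+, or 18.09+ for Pipelines)
docker --version

# Verify the Docker Compose version (must be v1.24+)
docker-compose --version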



Helm Chart Requirements

For Helm Charts installations, JFrog services require the following prerequisites:

  • Kubernetes 1.12+ (for installation instructions, see Kubernetes installation)
  • Kubernetes cluster with:
    • Dynamic storage provisioning enabled
    • Default StorageClass set to persistent storage
  • Kubectl installed and set up to use the cluster
  • Helm v3 installed

JFrog validates compatibility with the core Kubernetes distribution. Since Kubernetes distribution vendors may apply additional logic or hardening (for example, OpenShift and Rancher), JFrog Platform deployment with such vendors might not be fully supported.
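
A few quick checks with standard kubectl and Helm commands can confirm these prerequisites before installing the charts:

# Confirm the cluster is running Kubernetes 1.12 or later
kubectl version

# Confirm a StorageClass exists for dynamic provisioning (one should be marked "(default)")
kubectl get storageclass

# Confirm Helm v3 is installed
helm version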


Special Requirements

Artifactory

Working with Very Large Storage

In most cases, our recommendation is for storage that is at least 3 times the total size of stored artifacts in order to accommodate system backups. However, when working with a very large volume of artifacts, the recommendation may vary greatly according to the specific setup of your system. Therefore, when working with over 10 TB of stored artifacts, please contact JFrog Support, who will work with you to provide a storage recommendation customized to your specific setup.

Allocated storage space may vary

Xray downloads fetched artifacts and deletes them after indexing. However, running more parallel indexing processes means more temporary files exist at the same time, which requires more space.

This is especially applicable for large BLOBs such as Docker images.

Installation Recommendation

While Artifactory can use a Networked File System (NFS) for its binary storage, you should not install the application itself on an NFS. The Artifactory application needs very fast, reliable access to its configuration files, and any latency from an NFS will result in poor performance when the application fails to read these files. Therefore, install Artifactory on a local disk mounted directly to the host.

To use an NFS to store binaries, use the "file-system" binarystore.xml configuration with the additional "" setting.

Xray

Node Recommendations

Use a dedicated node for Xray with no other software running to alleviate performance bottlenecks, avoid port conflicts, and avoid setting uncommon configurations.

Storage Recommendations

In most cases, we recommend using an SSD drive for Xray for better performance. We do not recommend using an NFS drive: Xray is a disk I/O-intensive service, a slow NFS server can suffer from I/O bottlenecks, and NFS is mostly intended for storage replication.
Since the local storage used by the Xray services is temporary, it does not require replication between the different nodes in a multi-node/HA deployment.

File Handle Allocation Limit

Avoid performance bottlenecks

In the process of deep recursive scan in which Xray indexes artifacts and their dependencies (metadata), Xray needs to concurrently manage many open files. The default maximum number of files that can be opened concurrently on Linux systems is usually too low for the indexing process and can therefore cause a performance bottleneck. For optimal performance, we recommend increasing the number of files that can be opened concurrently to 100,000 (or the maximum your system can handle) by following the steps below.

Use the following command to determine the current file handle allocation limit:

cat /proc/sys/fs/file-max

Then, set the following parameters in your /etc/security/limits.conf file to the lower of 100,000 or the file handle allocation limit determined above.

The example shows how the relevant parameters in the /etc/security/limits.conf file are set to 100000. The actual setting for your installation may differ depending on the file handle allocation limit in your system.

root hard nofile 100000
root soft nofile 100000
xray hard nofile 100000
xray soft nofile 100000
postgres hard nofile 100000
postgres soft nofile 100000
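
The new limits apply only to new login sessions. Assuming the xray service account has a login shell, a quick way to verify the limit for it is:

# Open a new session as the xray user and print its open-file limit
su - xray -c 'ulimit -n'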