Overview

This page provides a guide for the different ways you can install and configure JFrog Mission Control, both single node and high availability. Additional information on high availability can be found here.

Mission Control is Moving to Artifactory as a Service

From JFrog Artifactory version 7.27.3, Mission Control has been integrated directly into Artifactory as a service. You no longer need to install Mission Control to use the features it provides. You must enable the service in Artifactory through the Artifactory system YAML file. The metrics capabilities that were provided by Mission Control are now provided through JFrog Insight. To learn more about how to install Insight, see Installing Insight.
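As a rough sketch, enabling the service usually amounts to a flag of this form in Artifactory's system.yaml (the exact key is an assumption here; verify it against the Artifactory System YAML reference for your version):

mc:
  enabled: true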

To learn more about how Mission Control has been integrated into Artifactory and how to migrate to the Mission Control microservice, see Migrating Platform Deployments and License Buckets.

You must install JFrog Insight to use trends and charts after you migrate to the Mission Control microservice. For more information, see Migrating from Mission Control to Insight.

You can still install Mission Control with Artifactory version 7.27.3 and later, until the end-of-life of Mission Control as a standalone product. Mission Control will continue to receive critical fixes and security updates.

JFrog Subscription Levels

SELF-HOSTED | ENTERPRISE X | ENTERPRISE+
Before You Begin

When installing Mission Control, you must run the installation as a root user or provide sudo access to a non-root user.

Admin Permissions

You will need admin permissions on the installation machine in the following cases:

  • Native installer - always requires admin permissions
  • Archive installer - requires admin permissions only during installation
  • Docker installer - does not require admin permissions

Use a dedicated server for Mission Control with no other software running to alleviate performance bottlenecks, avoid port conflicts, and avoid setting uncommon configurations.

Supported Platforms for Mission Control

  • Debian: 8.x, 9.x, 10.x
  • CentOS: 7.x, 8.x
  • RHEL: 7.x, 8.x
  • Ubuntu: 16.04, 18.04, 20.04
  • Windows Server: not supported
  • Helm Charts: 2.x, 3.x
  • SLES: 12 SP 5

Mission Control Requirements

Version 4.0 to 4.7.x

  • Minimum requirements, assuming you are running with an external database.
  • Actual values may change based on the amount of data in your application.
  • You can install and use Mission Control as a standalone product only with Artifactory 7.26.0 or lower.
  • Mission Control functionality has been integrated into Artifactory from Artifactory 7.27.3 and later.
Processor: 4 cores
Memory: 12 GB
Storage: 100 GB
External Network Port: 8082
Internal Network Ports (default):
  • 8080 for Mission Control Server
  • 8085 for Scheduler
  • 8087 for Executor (obsolete from Mission Control version 4.5 and above)
  • 8087 for Insights
  • 9200 for Elasticsearch (if bundled)
  • 5432 for PostgreSQL (if bundled)
  • 8082, 8046, 8047 and 8049 for the Router

Databases/Third-Party Applications (required):

PostgreSQL:

  • 9.5 (EOL)
  • 9.6 (EOL soon)
  • 10.x
  • 11.x
  • 12.x
  • 13.x

Elasticsearch:

  • 6.6.x (for Mission Control versions 4.0 and 4.2)
  • 7.6.1 (for Mission Control versions 4.3.2 to 4.5.0)
  • 7.8.0 and 7.8.1 (for Mission Control version 4.6.0)
  • 7.10.2 (for Mission Control versions 4.7.0 to 4.7.7)
  • 7.12.1 (for Mission Control version 4.7.8)
  • 7.14.1 (for Mission Control version 4.7.15)

System Architecture

To learn about the JFrog Platform Deployment, see System Architecture.

Installing Mission Control

Before installing Mission Control 4.x, you must first install JFrog Artifactory 7.x.


Single Node Installation

The following installation methods are supported:

Interactive Script Installation (recommended)

All install types are supported: Docker Compose, Linux Archive, RPM, and Debian.

The installer script provides an interactive way to install Mission Control and its dependencies. This installer must be used for Docker Compose installations.

  1. Download Mission Control.
  2. Extract the contents of the compressed archive and go to the extracted folder.

    tar -xvf jfrog-mc--.tar.gz
    cd jfrog-mc--

    OS user permissions for Linux archive

    When running Mission Control, the installation script creates a user called jfmc (by default) which must have run and execute permissions on the installation directory.

    It is recommended to extract the Mission Control download file into a directory that gives run and execute permissions to all users, such as /opt.

    Linux archive
    mv jfrog-mc--linux.tar.gz /opt/
    cd /opt
    tar -xf jfrog-mc--linux.tar.gz
    mv jfrog-mc--linux mc
    cd mc

    .env file included within the Docker-Compose archive

    This .env file is used by docker-compose and is updated during installations and upgrades.

    Note that some operating systems do not display dot files by default. If you have made any changes to the file, remember to back it up before an upgrade.
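    For example, a minimal backup before an upgrade might look like this (the name of the copy is illustrative):

    cp .env .env.backup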

  3. Run the installer script.

    The script prompts you with a series of mandatory inputs, including the jfrogUrl (custom base URL) and the joinKey. Enter N when the script prompts you whether or not to join a cluster. Enter Y only if you are adding secondary Mission Control nodes to a cluster.

    Docker Compose
    ./config.sh
    RPM/DEB
    ./install.sh

    Prerequisites for Linux archive

    Refer to the prerequisites for Mission Control in Linux Archive before running the install script.

    Linux archive
    ./install.sh --user  --group 
    -h | --help  : [optional] display usage
    -u | --user  : [optional] (default: jfmc) user which will be used to run the product; it will be created if it is unavailable
    -g | --group : [optional] (default: jfmc) group which will be used to run the product; it will be created if it is unavailable
  4. Validate and customize the product configuration (optional), including the third-party dependencies' connection details and ports.
  5. Start and manage the Mission Control service.

    systemd OS
    systemctl start|stop mc.service
    systemv
    service mc start|stop
    Docker Compose
    cd jfrog-mc--compose
    docker-compose -p mc up -d
    docker-compose -p mc ps
    docker-compose -p mc down

    You can install and manage Mission Control as a service in a Linux archive installation. Refer to the start Mission Control section under Linux Archive Manual Installation for more details.

    Linux archive
    mc/app/bin/mc.sh start|stop
  6. Access Mission Control from your browser at: http:///ui/ and go to the Dashboard tab in the Application module in the UI.
  7. Check the Mission Control log.

    tail -f $JFROG_HOME/mc/var/log/console.log

    Configuring the Log Rotation of the Console Log

    The console.log file can grow quickly since all services write to it. This file is not log-rotated for Darwin installations. Learn more on how to configure the log rotation.

Manual Linux Archive Installation

  1. Download Mission Control.

  2. Extract the contents of the compressed archive under JFROG_HOME and move it into the mc directory.

    tar -xvf jfrog-mc--linux.tar.gz
    mv jfrog-mc--linux mc
  3. Install PostgreSQL by following the steps detailed in Installing PostgreSQL.

    PostgreSQL is required and must be installed before continuing with the next installation steps. Set your PostgreSQL connection details in the Shared Configurations section of the $JFROG_HOME/mc/var/etc/system.yaml file.

  4. Prepare for the Elasticsearch installation by increasing the map count. For more information, see the Elasticsearch documentation.

    sudo sysctl -w vm.max_map_count=262144

    To make this change permanent, remember to update the vm.max_map_count setting in /etc/sysctl.conf.
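    A minimal sketch of making the setting persistent (assumes root access and that the key is not already present in /etc/sysctl.conf):

    echo "vm.max_map_count=262144" >> /etc/sysctl.conf
    sysctl -p    # reload the settings without a reboot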

  5. Install Elasticsearch. Instructions to install Elasticsearch are available here.

    You can install the package available at /mc/app/third-party/elasticsearch/elasticsearch-oss-.tar.gz or you can download a compatible version of Elasticsearch from this page.

    1. Install Search Guard. The Search Guard package is located in the extracted contents at /mc/app/third-party/elasticsearch/search-guard-.tar.gz. For installation steps, refer to the Search Guard documentation.

      Important

      You must install the Search Guard plugin to ensure secure communication with Elasticsearch.


      1. Add an admin user to Search Guard, to ensure authenticated communication with Elasticsearch.
        The Search Guard configuration accepts a hashed password. Use the following command to generate the hash for the password.

        /mc/app/third-party/elasticsearch/elasticsearch-/plugins/search-guard-7/tools/hash.sh -p 
        # This will output a hashed password; make a copy of it
      2. Prepare the configuration snippet to add a new (admin) user with the hashed password obtained from the previous step.

        :
          hash: ""
          backend_roles:
            - "admin"
          description: "Insight Elastic admin user"
      3. Paste the above snippet at the end of the sg_internal_users.yml file located at <JFROG_HOME>/mc/app/third-party/elasticsearch/elasticsearch-/plugins/search-guard-7/sgconfig/.

    2. Enable anonymous access to the _cluster/health endpoint. This is required to check the health of the Elasticsearch cluster.
      Enable anonymous auth in the sg_config.yml file at <JFROG_HOME>/mc/app/third-party/elasticsearch/elasticsearch-/plugins/search-guard-7/sgconfig/.

      sg_config:
        dynamic:
          http:
            anonymous_auth_enabled: true  # set this to true
    3. Map the anonymous user sg_anonymous to the backend role "sg_anonymous_backendrole" in the sg_roles_mapping.yml file at <JFROG_HOME>/mc/app/third-party/elasticsearch/elasticsearch-/plugins/search-guard-7/sgconfig/.

      sg_anonymous:
        backend_roles:
          - sg_anonymous_backendrole
    4. Add the following snippet to the end of the sg_roles.yml file located at <JFROG_HOME>/mc/app/third-party/elasticsearch/elasticsearch-/plugins/search-guard-7/sgconfig/.

      sg_anonymous:
        cluster_permissions:
          - cluster:monitor/health
  6. Add the following in the shared section of the $JFROG_HOME/mc/var/etc/system.yaml file. Refer to the Shared Configurations section.

    shared:
      elasticsearch:
        external: true
        url: :
        username: 
        password: 

    You must set the value of external to true under the Elasticsearch configuration in the system.yaml file even if you install Elasticsearch on the same machine as Mission Control.

    If you use Amazon Elasticsearch Service, enter the following in the shared section of the YAML file.

    shared:
      elasticsearch:
        url: :
        external: true
        aes:
          signed: true
          serviceName: 
          region: 
          accessKey: 
          secretKey: 

    If you use the Amazon Elasticsearch Service, you must log in to the service using your Amazon AWS credentials.

  7. Start PostgreSQL and Elasticsearch.

  8. Customize the product configuration.
    1. Set the Artifactory connection details.
    2. Customize the PostgreSQL Database connection details (optional).
    3. Set any additional configurations (for example: ports, node id) using the Mission Control System YAML.

  9. Start and manage the Mission Control service as the user who extracted the tar.
    As a process

    Daemon Process
    /mc/app/bin/mc.sh start

    Manage the process.

    /mc/app/bin/mc.sh start|stop|status|restart

    As a service: Mission Control is packaged as an archive file and an install script that can be used to install it as a service running under a custom user. This is currently supported on Linux systems.

    OS User Permissions

    When running Mission Control as a service, the installation script creates a user called jfmc (by default) which must have run and execute permissions on the installation directory.

    It is recommended to extract the Mission Control download file into a directory that gives run and execute permissions to all users, such as /opt.

    To install Mission Control as a service, execute the following command as root:

    User and group can be passed through /mc/var/etc/system.yaml as shared.user and shared.group. These values take precedence over values passed through the command line on install.

    /mc/app/bin/installService.sh --user  --group 
    -u | --user  : [optional] (default: jfmc) user which will be used to run the product; it will be created if it is unavailable
    -g | --group : [optional] (default: jfmc) group which will be used to run the product; it will be created if it is unavailable

    The user and group will be stored in /mc/var/etc/system.yaml at the end of the installation.
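    A minimal sketch of how these keys might look in system.yaml (jfmc is the default user and group mentioned above; adjust to your environment):

    shared:
      user: jfmc
      group: jfmc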
    To manage the service, use the systemd or init.d commands depending on your system.

    Using systemd
    systemctl  mc.service
    Using init.d
    service mc 
  10. Access Mission Control from your browser at: http:///ui/ and go to the Dashboard tab in the Application module in the UI.
  11. Check the Mission Control log.

    tail -f $JFROG_HOME/mc/var/log/console.log

Manual RPM Installation

The RPM installation bundles Mission Control and all its dependencies. It is provided as native RPM packages, where Mission Control and its dependencies must be installed separately. Use this method if you are automating installations.

  1. Download Mission Control.

  2. Extract the contents of the compressed archive, and go to the extracted folder:

    tar -xvf jfrog-mc--rpm.tar.gz
    cd jfrog-mc--rpm
  3. Install Mission Control. You must run as a root user.

    rpm -Uvh --replacepkgs ./mc/mc.rpm
  4. Install PostgreSQL and start the PostgreSQL service.

    PostgreSQL is required and must be installed before continuing with the next installation steps.

    Set your PostgreSQL connection details in the Shared Configurations section of the $JFROG_HOME/mc/var/etc/system.yaml file.

  5. Install Elasticsearch. Instructions to install Elasticsearch are available here.
    You can install the package available at jfrog-mc--rpm/third-party/elasticsearch/elasticsearch-oss-.tar.gz or you can download a compatible version of Elasticsearch from this page.

    When connecting an external instance of Elasticsearch to Mission Control, add the following flag in the Shared Configurations section of the $JFROG_HOME/mc/var/etc/system.yaml file.

    shared:
      elasticsearch:
        external: true


    1. Install Search Guard. The Search Guard package is located in the extracted contents at jfrog-mc--rpm/third-party/elasticsearch/search-guard-.tar.gz. For installation steps, refer to the Search Guard documentation.

      Important

      You must install the Search Guard plugin to ensure secure communication with Elasticsearch.


      1. Add an admin user to Search Guard, to ensure authenticated communication with Elasticsearch.
        The Search Guard configuration accepts a hashed password. Use the following command to generate the hash for the password.

        /etc/elasticsearch/plugins/search-guard-7/tools/hash.sh -p 
        # This will output a hashed password; make a copy of it
      2. Prepare the configuration snippet to add a new (admin) user with the hashed password obtained from the previous step.

        :
          hash: ""
          backend_roles:
            - "admin"
          description: "Insight Elastic admin user"
      3. Paste the above snippet at the end of the sg_internal_users.yml file located at /etc/elasticsearch/plugins/search-guard-7/sgconfig/.

    2. Enable anonymous access to the _cluster/health endpoint. This is required to check the health of the Elasticsearch cluster.
      Enable anonymous auth in the sg_config.yml file at /etc/elasticsearch/plugins/search-guard-7/sgconfig/.

      sg_config:
        dynamic:
          http:
            anonymous_auth_enabled: true  # set this to true
    3. Map the anonymous user sg_anonymous to the backend role "sg_anonymous_backendrole" in the sg_roles_mapping.yml file at /etc/elasticsearch/plugins/search-guard-7/sgconfig/.

      sg_anonymous:
        backend_roles:
          - sg_anonymous_backendrole
    4. Add the following snippet to the end of the sg_roles.yml file located at /etc/elasticsearch/plugins/search-guard-7/sgconfig/.

      sg_anonymous:
        cluster_permissions:
          - cluster:monitor/health
  6. Add the following in the shared section of the $JFROG_HOME/mc/var/etc/system.yaml file. Refer to the Shared Configurations section.

    shared:
      elasticsearch:
        url: :
        username: 
        password: 

    You must set the value of external to true under the Elasticsearch configuration in the system.yaml file even if you install Elasticsearch on the same machine as Mission Control.

    If you use Amazon Elasticsearch Service, enter the following in the shared section of the YAML file.

    shared:
      elasticsearch:
        url: :
        external: true
        aes:
          signed: true
          serviceName: 
          region: 
          accessKey: 
          secretKey: 

    If you use the Amazon Elasticsearch Service, you must log in to the service using your Amazon AWS credentials.

  7. Customize the product configuration.

    1. Set the Artifactory connection details.
    2. Customize the PostgreSQL Database connection details. (optional)
    3. Set any additional configurations (for example: ports, node id) using the Mission Control System YAML.

  8. Start and manage the Mission Control service.

    systemd OS
    systemctl start|stop mc.service
    systemv OS
    service mc start|stop|status|restart
  9. Access Mission Control from your browser at: http:///ui/ and go to the Dashboard tab in the Application module in the UI.
  10. Check the Mission Control log.

    tail -f $JFROG_HOME/mc/var/log/console.log

Manual Debian Installation

The Debian installation bundles Mission Control and all its dependencies. It is provided as native Debian packages, where Mission Control and its dependencies must be installed separately. Use this method if you are automating installations.

  1. Download Mission Control.
  2. Extract the contents of the compressed archive, and go to the extracted folder:

    tar -xvf jfrog-mc--deb.tar.gz
    cd jfrog-mc--deb
  3. Install Mission Control. You must run as a root user.

    dpkg -i ./mc/mc.deb
  4. Install PostgreSQL.

    PostgreSQL is required and must be installed before continuing with the next installation steps.

    Set your PostgreSQL connection details in the Shared Configurations section of the $JFROG_HOME/mc/var/etc/system.yaml file.

  5. Install Elasticsearch. Instructions to install Elasticsearch are available here.


    You can install the package available at jfrog-mc--deb/third-party/elasticsearch/elasticsearch-oss-.tar.gz or you can download a compatible version of Elasticsearch from this page.

    1. Install Search Guard. The Search Guard package is located in the extracted contents at jfrog-mc--deb/third-party/elasticsearch/search-guard-.tar.gz. For installation steps, refer to the Search Guard documentation.

      Important

      You must install the Search Guard plugin to ensure secure communication with Elasticsearch.


      1. Add an admin user to Search Guard, to ensure authenticated communication with Elasticsearch.
        The Search Guard configuration accepts a hashed password. Use the following command to generate the hash for the password.

        /usr/share/elasticsearch/plugins/search-guard-7/tools/hash.sh -p 
        # This will output a hashed password; make a copy of it
      2. Prepare the configuration snippet to add a new (admin) user with the hashed password obtained from the previous step.

        :
          hash: ""
          backend_roles:
            - "admin"
          description: "Insight Elastic admin user"
      3. Paste the above snippet at the end of the sg_internal_users.yml file located at /usr/share/elasticsearch/plugins/search-guard-7/sgconfig/.

    2. Enable anonymous access to the _cluster/health endpoint. This is required to check the health of the Elasticsearch cluster.
      Enable anonymous auth in the sg_config.yml file at /usr/share/elasticsearch/plugins/search-guard-7/sgconfig/.

      sg_config:
        dynamic:
          http:
            anonymous_auth_enabled: true  # set this to true
    3. Map the anonymous user sg_anonymous to the backend role "sg_anonymous_backendrole" in the sg_roles_mapping.yml file at /usr/share/elasticsearch/plugins/search-guard-7/sgconfig/.

      sg_anonymous:
        backend_roles:
          - sg_anonymous_backendrole
    4. Add the following snippet to the end of the sg_roles.yml file located at /usr/share/elasticsearch/plugins/search-guard-7/sgconfig/.

      sg_anonymous:
        cluster_permissions:
          - cluster:monitor/health



  6. Add the following in the shared section of the $JFROG_HOME/mc/var/etc/system.yaml file. Refer to the Shared Configurations section.

    shared:
      elasticsearch:
        url: :
        username: 
        password: 

    You must set the value of external to true under the Elasticsearch configuration in the system.yaml file even if you install Elasticsearch on the same machine as Mission Control.

    If you use Amazon Elasticsearch Service, enter the following in the shared section of the YAML file.

    shared:
      elasticsearch:
        url: :
        external: true
        aes:
          signed: true
          serviceName: 
          region: 
          accessKey: 
          secretKey: 

    If you use the Amazon Elasticsearch Service, you must log in to the service using your Amazon AWS credentials.

  7. Customize the product configuration.

    1. Set the Artifactory connection details.
    2. Customize the PostgreSQL Database connection details. (optional)
    3. Set any additional configurations (for example: ports, node id) using the Mission Control System YAML.

  8. Start and manage the Mission Control service.

    systemd OS
    systemctl start|stop mc.service
    systemv OS
    service mc start|stop|status|restart
  9. Access Mission Control from your browser at: http:///ui/ and go to the Dashboard tab in the Application module in the UI.
  10. Check the Mission Control log.

    Linux
    tail -f $JFROG_HOME/mc/var/log/console.log

Helm Chart Installation

Deploying Artifactory for Small, Medium or Large Installations

The chart directory includes three values files, one for each installation type (small/medium/large). These values files are recommendations for setting resource requests and limits for your installation. You can find the files in the corresponding chart directory.

  1. Add the https://charts.jfrog.io repository to your Helm client.

    helm repo add jfrog https://charts.jfrog.io
  2. Update the repository.

    helm repo update
  3. Initiate the installation by providing a join key and JFrog URL as parameters to the Mission Control chart installation.

    helm upgrade --install mission-control --set missionControl.joinKey= \
      --set missionControl.jfrogUrl= \
      --namespace mission-control jfrog/mission-control

    Alternatively, you can manually create a secret containing the join key and then pass it to the template during install/upgrade. The key must be named join-key.

    # Create a secret containing the key
    kubectl create secret generic my-secret --from-literal=join-key=
    # Pass the created secret to helm
    helm upgrade --install mission-control --set missionControl.joinKeySecretName=my-secret --namespace mission-control jfrog/mission-control

    In either case, make sure to pass the same join key on all future calls to helm install and helm upgrade. In the first case, this means always passing --set missionControl.joinKey=. In the second case, this means always passing --set missionControl.joinKeySecretName=my-secret and ensuring the contents of the secret remain unchanged.

  4. Customize the product configuration (optional), including database, Java Opts, and filestore.

    Unlike other installations, Helm Chart configurations are made to the values.yaml file and are then applied to the system.yaml.

    Follow these steps to apply the configuration changes.

    1. Make the changes to values.yaml.
    2. Run the command.

      helm upgrade --install mission-control --namespace mission-control -f values.yaml

    3. Restart Mission Control to apply the changes.
  5. Access Mission Control from your browser at: http:///ui/ and go to the Dashboard tab in the Application module in the UI.

  6. Check the status of your deployed Helm releases.

    helm status mission-control

HA Installation

The following describes how to set up a Mission Control HA cluster with more than one node. For more information about HA, see System Architecture.

Prerequisites

All nodes within the same Mission Control HA installation must be running the same Mission Control version.

For a Mission Control HA cluster to work correctly, you must have at least three nodes in the cluster.


Database

Mission Control HA requires an external PostgreSQL database. Make sure to install it before proceeding to install the first node. There are several ways to set up PostgreSQL for redundancy, including HA, load balancing, and replication. For more information, see the PostgreSQL documentation.

Network

  • All the Mission Control HA components (Mission Control cluster nodes, database server and Elasticsearch) must be within the same fast LAN.

  • All the HA nodes must communicate with each other through dedicated TCP ports.

The following installation methods are supported:

Interactive Script

All install types are supported: Docker Compose, Linux Archive, RPM, and Debian.

The installer script provides an interactive way to install Mission Control and its dependencies. This installer must be used for Docker Compose installations.

Installing the First Node

  1. Install the first node. The installation is identical to the single node installation.

    Do not start the Mission Control service.

  2. Start the Mission Control service.

    systemd OS
    systemctl start mc.service
    systemv
    service mc start
    Docker Compose
    cd jfrog-mc--compose
    docker-compose -p mc up -d

    You can install and manage Mission Control as a service in a Linux archive installation. Refer to the start Mission Control section under Linux Archive Manual Installation for more details.

    Linux Archive
    mc/app/bin/mc.sh start
  3. Access Mission Control from your browser at: http:///ui/ and go to the Dashboard tab in the Application module in the UI.

  4. Check the Mission Control log.

    tail -f $JFROG_HOME/mc/var/log/console.log
    Docker Compose
    docker-compose -p mc logs

Installing Additional Nodes

For a node to join a cluster, the node must have the same database configuration and the master key.
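As an illustration, the master key could be copied from the first node to an additional node with something along these lines (the host and user are placeholders; the key path is the one used elsewhere in this guide):

scp $JFROG_HOME/mc/var/etc/security/master.key user@additional-node:$JFROG_HOME/mc/var/etc/security/master.key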

  1. If you installed Search Guard along with Elasticsearch, you must copy the client and node certificates from Elasticsearch's configuration folder on the primary node to all the additional nodes.
    If you want to use the bundled Elasticsearch installation with Mission Control in RPM and Debian installations, copy the client and node certificates from Elasticsearch's configuration folder on the master node to a new directory named sg-certs under the extracted folder on the additional node.

    RPM

    Create the folder sg-certs inside the installer folder, jfrog-mc--rpm.

    Copy localhost.key, localhost.pem, and root-ca.pem from the Elasticsearch source folder, /etc/elasticsearch/, to jfrog-mc--rpm/sg-certs.

    Debian

    Create the folder sg-certs inside the installer folder, jfrog-mc--deb.

    Copy localhost.key, localhost.pem, and root-ca.pem from the Elasticsearch source folder, /etc/elasticsearch/, to jfrog-mc--deb/sg-certs.

    Docker Compose

    The Docker Compose installer uses pre-generated certificates for Search Guard. You do not need to manually copy the client and node certificates.

  2. Install the additional node. The installation is identical to the single node installation with the following differences:
    • Enter Y when the installer prompts whether to join a cluster.
    • Enter the database connection string of the primary node.
    • If you use the bundled PostgreSQL database, enter the database name as mc.
    • Enter the master key of the primary Mission Control node.
      The master key is available at $JFROG_HOME/etc/security/master.key.

  3. Start the additional node.

  4. Access Mission Control from your browser at: http:///ui/ and go to the Dashboard tab in the Application module in the UI.

  5. Check the Mission Control log.

    Linux
    tail -f $JFROG_HOME/mc/var/log/console.log
    Docker Compose
    docker-compose -p mc logs

Manual Linux Archive Installation

Installing the First Node

  1. Install the first node. The installation is identical to the single node installation.

    Do not start the Mission Control service.

  2. Configure the system.yaml file with the database and first node configuration details. For example:

    First node system.yaml
    shared:
      database:
        type: postgresql
        driver: org.postgresql.Driver
        url: jdbc:postgresql:///mission_control?sslmode=disable
        username: 
        password: 
      jfrogUrl: 
      security:
        joinKey: 
  3. Start and manage the Mission Control service.

    systemd OS
    systemctl start|stop mc.service
    Systemv OS
    service mc start|stop



  4. Access Mission Control from your browser at: http:///ui/ and go to the Dashboard tab in the Application module in the UI.

  5. Check the Mission Control log.

    Linux
    tail -f $JFROG_HOME/mc/var/log/console.log

Installing Additional Nodes

For a node to join a cluster, the node must have the same database configuration and the master key. Install all additional nodes using the same steps described above, with the additional steps below:

  1. Configure the system.yaml file for the additional node with the master key, database, and active node configurations. For example:

    Additional node system.yaml
    shared:
      database:
        type: postgresql
        driver: org.postgresql.Driver
        url: jdbc:postgresql:///mission_control?sslmode=disable
        username: 
        password: 
      jfrogUrl: 
      security:
        joinKey: 
      # Configure the following property values when Elasticsearch is installed from the bundled Mission Control package.
      elasticsearch:
        clusterSetup: "YES"
        unicastFile: "$JFROG_HOME/mc/data/elasticsearch/config/unicast_hosts.txt"
  2. Copy the master.key from the first node to the additional node, located at $JFROG_HOME/mc/var/etc/security/master.key.
  3. Add the username and password as configured for Elasticsearch on the master node to the additional node too. Add them to the Shared Configurations section in the $JFROG_HOME/mc/var/etc/system.yaml file.
  4. If you installed Search Guard along with Elasticsearch, copy the client and node certificates from Elasticsearch's config folder on the primary node to a new directory, sg-certs, under the extracted folder on the additional node.

  5. Start the additional node.

  6. Access Mission Control from your browser at: http:///ui/ and go to the Dashboard tab in the Application module in the UI.
  7. Check the Mission Control log.

    Linux
    tail -f $JFROG_HOME/mc/var/log/console.log

Helm Installation HA

Important

Currently, it is not possible to connect a JFrog product (e.g., Mission Control) that is within a Kubernetes cluster with another JFrog product (e.g., Artifactory) that is outside of the cluster, as this is considered a separate network. Therefore, JFrog products cannot be joined together if one of them is in a cluster.

Deploying Artifactory for Small, Medium or Large Installations

The chart directory includes three values files, one for each installation type (small/medium/large). These values files are recommendations for setting resource requests and limits for your installation. You can find the files in the corresponding chart directory.

High Availability

For high availability of Mission Control, set the replicaCount in the values.yaml file to >1 (the recommended value is 3).

helm upgrade --install mission-control --namespace mission-control --set replicaCount=3 jfrog/mission-control
  1. Add the ChartCenter Helm repository to your Helm client.

    helm repo add jfrog https://charts.jfrog.io
  2. Update the repository.

    helm repo update
  3. Initiate the installation by providing a join key and JFrog URL as parameters to the Mission Control chart installation.

    helm upgrade --install mission-control --set missionControl.joinKey= \
      --set missionControl.jfrogUrl= \
      --namespace mission-control jfrog/mission-control

    Alternatively, you can manually create a secret containing the join key and then pass it to the template during install/upgrade. The key must be named join-key.

    # Create a secret containing the key
    kubectl create secret generic my-secret --from-literal=join-key=
    # Pass the created secret to helm
    helm upgrade --install mission-control --set missionControl.joinKeySecretName=my-secret --namespace mission-control jfrog/mission-control

    In either case, make sure to pass the same join key on all future calls to helm install and helm upgrade. In the first case, this means always passing --set missionControl.joinKey=. In the second case, this means always passing --set missionControl.joinKeySecretName=my-secret and ensuring the contents of the secret remain unchanged.

  4. Customize the product configuration (optional), including database, Java Opts, and filestore.

    Unlike other installations, Helm Chart configurations are made to the values.yaml file and are then applied to the system.yaml.

    Follow these steps to apply the configuration changes.

    1. Make the changes to values.yaml.
    2. Run the command.

      helm upgrade --install mission-control --namespace mission-control -f values.yaml

    3. Restart Mission Control to apply the changes.
  5. Access Mission Control from your browser at: http:///ui/ and go to the Dashboard tab in the Application module in the UI.

  6. Check the status of your deployed Helm releases.

    helm status mission-control



Product Configuration

After installing and before running Mission Control, you may set the following configurations.

Where to find the system configurations?

You can configure all your system settings using the system.yaml file located in the $JFROG_HOME/mc/var/etc folder. For more information, see Mission Control YAML Configuration.

If you do not have a System YAML file in your folder, copy the template available in the folder and name it system.yaml.

For the Helm charts, the system.yaml file is managed in the chart's values.yaml.

Artifactory Connection Details

Mission Control requires a working Artifactory server and a suitable license. The Mission Control connection to Artifactory requires two parameters:

  • jfrogUrl - URL to the machine where JFrog Artifactory is deployed, or the load balancer pointing to it. It is recommended to use DNS names rather than direct IPs. For example: http://jfrog.acme.com or http://10.20.30.40:8082.
    Set it in the Shared Configurations section of the $JFROG_HOME/mc/etc/system.yaml file.
  • join.key - This is the "secret" key required by Artifactory for registering and authenticating the Mission Control server.
    You can fetch the Artifactory joinKey (join key) from the JPD UI in the Administration module | User Management | Settings | Join Key.
    Set the join.key used by your Artifactory server in the Shared Configurations section of the $JFROG_HOME/mc/etc/system.yaml file.
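For example, a minimal sketch of these two settings in the Shared Configurations section of system.yaml (values are illustrative; the nesting follows the system.yaml examples shown elsewhere on this page):

shared:
  jfrogUrl: http://jfrog.acme.com:8082
  security:
    joinKey: <join key copied from the JPD UI>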

Changing PostgreSQL Database Credentials

Mission Control comes bundled with a PostgreSQL database out-of-the-box, which comes pre-configured with default credentials.

These commands are indicative and assume some familiarity with PostgreSQL. Do not copy and paste them blindly. For docker-compose installations, you will need to open a shell inside the PostgreSQL container before you run them.
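For a Docker Compose installation, one way to open a shell inside the bundled PostgreSQL container (using the mc_postgres container name referenced later on this page) is:

docker exec -it mc_postgres bash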

To change the default credentials:

PostgreSQL
#1. Change password for the Mission Control user
# Access PostgreSQL as the jfmc user, adding the optional -W flag to invoke the password prompt
$ psql -d mission_control -U jfmc -W
# Securely change the password. Enter and then retype the password at the prompt.
\password jfmc
# Verify the update was successful by logging in with the new credentials
$ psql -d mission_control -U jfmc -W

#2. Change password for the scheduler user
# Access PostgreSQL as the jfisc user, adding the optional -W flag to invoke the password prompt
$ psql -d mission_control -U jfisc -W
# Securely change the password. Enter and then retype the password at the prompt.
\password jfisc
# Verify the update was successful by logging in with the new credentials
$ psql -d mission_control -U jfisc -W

#3. Change password for the insight server user
# Access PostgreSQL as the jfisv user, adding the optional -W flag to invoke the password prompt
$ psql -d mission_control -U jfisv -W
# Securely change the password. Enter and then retype the password at the prompt.
\password jfisv
# Verify the update was successful by logging in with the new credentials
$ psql -d mission_control -U jfisv -W

Changing Elasticsearch Credentials

The Search Guard tool is used to manage authentication. To change the password for the default user, Search Guard accepts a hashed password in the configuration.

  1. Obtain the username used to access Elasticsearch from the $JFROG_HOME/mc/var/etc/system.yaml file, available as elasticsearch.username.
  2. Generate the hashed password by providing the password (in plain text) as input.

    $ELASTICSEARCH_HOME/plugins/search-guard-7/tools/hash.sh -p 
  3. The output from the previous step should be updated in the configuration for the default user.

    Other flavours
    vi $ELASTICSEARCH_HOME/plugins/search-guard-7/sgconfig/sg_internal_users.yml
    # Scroll in the file to find an entry for the username of the default user
    # Update the value for "hash" with the hash content obtained from the previous step
    :
      hash: 
  4. Run the command to initialise Search Guard.
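    The exact invocation depends on your installation layout; a sketch based on the sgadmin command shown later on this page (certificate names and paths are assumptions and must match your Search Guard setup):

    export JAVA_HOME=/mc/app/third-party/java
    cd $ELASTICSEARCH_HOME/plugins/search-guard-7/tools
    bash ./sgadmin.sh -p 9300 -cacert root-ca.pem -cert sgadmin.pem -key sgadmin.key -nhnv -icl -cd ../sgconfig/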

Add Certificates when Connecting to SSL Enabled Elasticsearch

Other flavours
cd $JFROG_HOME/mc/var/etc/security/keys/trusted
# Copy the certificates to this location and restart the MC services

Set your PostgreSQL and Elasticsearch connection details in the Shared Configurations section of the $JFROG_HOME/mc/var/etc/system.yaml file.

Load a Custom Certificate to Elasticsearch Search Guard

If you prefer to use custom certificates when Search Guard is enabled with TLS in Elasticsearch, you can use the search-guard-tlstool to generate Search Guard certificates.

The tool to generate Search Guard certificates is available in $JFROG_HOME/app/third-party/elasticsearch/search-guard-tlstool-.tar.gz. For more information about generating certificates, see Search Guard TLS Tool.

  1. Run this tool to generate the certificates.

    tar -xvf $JFROG_HOME/app/third-party/elasticsearch/search-guard-tlstool-.tar.gz
    cp $JFROG_HOME/app/third-party/elasticsearch/config/tlsconfig.yml $JFROG_HOME/app/third-party/elasticsearch/search-guard-tlstool-/config
    cd $JFROG_HOME/app/third-party/elasticsearch/search-guard-tlstool-/tools
    ./sgtlstool.sh -c ../config/tlsconfig.yml -ca -crt
    # A folder named "out" will be created with all the required certificates
    cd out
  2. Copy the generated certificates (localhost.key, localhost.pem, root-ca.pem, sgadmin.key, sgadmin.pem) to the target location based on the installer type.

    Native
    cp localhost.key localhost.pem root-ca.pem sgadmin.key sgadmin.pem /etc/elasticsearch/certs/
    Docker Compose
    cp localhost.key localhost.pem root-ca.pem sgadmin.key sgadmin.pem $JFROG_HOME/mc/var/data/elasticsearch/certs

Configuring a Custom Elasticsearch Role

The Search Guard tool is used to manage authentication. By default, an admin user is required to authenticate with Elasticsearch. As an alternative, a new user can be configured to authenticate with Elasticsearch by assigning a custom role with the permissions needed for the application to work.

  1. Add the following snippet to define a new role with custom permissions:

    vi $ELASTICSEARCH_HOME/plugins/search-guard-7/sgconfig/sg_roles.yml
    # Add the following snippet to define a new role with custom permissions
    :
      cluster_permissions:
        - cluster:monitor/health
        - cluster:monitor/main
        - cluster:monitor/state
        - "indices:admin/template/get"
        - "indices:admin/template/delete"
        - "indices:admin/template/put"
        - "indices:admin/aliases"
        - "indices:admin/create"
      index_permissions:
        - index_patterns:
            - "active_*"
          allowed_actions:
            - "indices:monitor/health"
            - "indices:monitor/stats"
            - "indices:monitor/settings/get"
            - "indices:admin/aliases/get"
            - "indices:admin/get"
            - "indices:admin/aliases"
            - "indices:admin/create"
            - "indices:admin/delete"
            - "indices:admin/rollover"
            - SGS_CRUD


  2. Add the following snippet to add a new user:

    vi $ELASTICSEARCH_HOME/plugins/search-guard-7/sgconfig/sg_internal_users.yml
    # Add the following snippet to add a new user
    :
      hash: 
      backend_roles:
        - ""   # role_name defined in the previous step
      description: ""


    1. Run the following command to generate a hashed password:

      $ELASTICSEARCH_HOME/plugins/search-guard-7/tools/hash.sh -p 
  3. Add the following snippet to map the new username to the role defined in the previous step:

    vi $ELASTICSEARCH_HOME/plugins/search-guard-7/sgconfig/sg_roles_mapping.yml
    # Add the following snippet to map the new username to the role defined in the previous step
    :
      users:
        - ""
  4. Initialize Search Guard to upload the above changes made in the configuration.

    export JAVA_HOME=/mc/app/third-party/java
    cd $ELASTICSEARCH_HOME/plugins/search-guard-7/tools
    bash ../tools/sgadmin.sh -p 9300 -cacert root-ca.pem -cert sgadmin.pem -key sgadmin.key -nhnv -icl -cd ../sgconfig/


  5. Set the new credentials in the $JFROG_HOME/mc/etc/system.yaml file:

    shared:
      elasticsearch:
        username: 
        password: 
  6. Restart Mission Control services.

Installing PostgreSQL


Passwords for Postgres with Special Characters

Do not use a password for PostgreSQL that has special characters. Mission Control may not work if you configure a password that has special characters, such as ~ = # @ $ /.

RPM

  1. Install PostgreSQL.

    # Run the following commands from the extracted jfrog-mc--rpm directory.
    # Note: Use postgreSQL rpms with el6 when installing on Centos 6 and RHEL 6 and use postgresql13-13.2-1 packages
    # Note: Use postgreSQL rpms with el8 when installing on Centos 8 and RHEL 8
    mkdir -p /var/opt/postgres/data
    rpm -ivh --replacepkgs ./third-party/postgresql/libicu-50.2-3.el7.x86_64.rpm   # (only AWS instance)
    rpm -ivh --replacepkgs ./third-party/postgresql/postgresql13-libs-13.2-1PGDG.rhel7.x86_64.rpm
    rpm -ivh --replacepkgs ./third-party/postgresql/postgresql13-13.2-1PGDG.rhel7.x86_64.rpm
    rpm -ivh --replacepkgs ./third-party/postgresql/postgresql13-server-13.2-1PGDG.rhel7.x86_64.rpm
    chown -R postgres:postgres /var/opt/postgres
    export PGDATA="/var/opt/postgres/data"
    export PGSETUP_INITDB_OPTIONS="-D /var/opt/postgres/data"
    # For centos 7&8 / rhel 7&8
    sed -i "s~^Environment=PGDATA=.*~Environment=PGDATA=/var/opt/postgres/data~" /lib/systemd/system/postgresql-13.service
    systemctl daemon-reload
    /usr/pgsql-13/bin/postgresql-13-setup initdb
    # For centos 6 / rhel 6
    sed -i "s~^PGDATA=.*~PGDATA=/var/opt/postgres/data~" /etc/init.d/postgresql-13
    service postgresql-13 initdb
    # Replace "ident" and "peer" with "trust" in the postgres hba configuration file, i.e. /var/opt/postgres/data/pg_hba.conf
  2. Configure PostgreSQL to allow external IP connections.

  3. By default, PostgreSQL only allows localhost client communications. To enable different IPs to communicate with the database, you will need to configure the pg_hba.conf file.

    File location according to installation type

    • Docker-compose: $JFROG_HOME/mc/var/data/postgres/data
    • Native installations: /var/opt/postgres/data

    To grant all IPs access you may add the below, under the IPv4 local connections section.

    host all all 0.0.0.0/0 trust

    Add the following line to /var/opt/postgres/data/postgresql.conf.

    listen_addresses='*'
    port=5432
  4. Start PostgreSQL.

    systemctl start postgresql-13.service
    # or
    service postgresql-13 start
  5. Set up the database and user.

    ## Run the script to seed the tables and schemas needed by Mission Control
    cp -f ./third-party/postgresql/createPostgresUsers.sh /tmp
    source /etc/locale.conf
    cd /tmp && su postgres -c "POSTGRES_PATH=/usr/pgsql-13/bin PGPASSWORD=postgres DB_PASSWORD=password bash /tmp/createPostgresUsers.sh"

Debian

Prerequisites

It is recommended to ensure your apt-get libraries are up to date, using the following commands.

Install any missing dependencies
apt-get update
apt-get install -f -y
apt-get update

# Create the file repository configuration to pull postgresql dependencies
cp -f /etc/apt/sources.list /etc/apt/sources.list.origfile
sh -c 'echo "deb http://ftp.de.debian.org/debian/ $(lsb_release -cs) main non-free contrib" >> /etc/apt/sources.list'
sh -c 'echo "deb-src http://ftp.de.debian.org/debian/ $(lsb_release -cs) main non-free contrib" >> /etc/apt/sources.list'
cp -f /etc/apt/sources.list.d/pgdg.list /etc/apt/sources.list.d/pgdg.list.origfile
sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt/ $(lsb_release -cs)-pgdg main" >> /etc/apt/sources.list.d/pgdg.list'
sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt/ precise-pgdg main" >> /etc/apt/sources.list.d/pgdg.list'
wget --no-check-certificate --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
Install Steps
  1. Install PostgreSQL.
    Run the following commands from the extracted jfrog-mc--deb directory.

    mkdir -p /var/opt/postgres/data
    Ubuntu 16.04 (xenial)
    dpkg -i ./third-party/postgresql/postgresql-13_13.2-1.pgdg16.04+1_amd64.deb
    Ubuntu 18.04 (bionic)
    dpkg -i ./third-party/postgresql/postgresql-13_13.2-1.pgdg18.04+1_amd64.deb
    Ubuntu 20.04 (focal)
    dpkg -i ./third-party/postgresql/postgresql-13_13.2-1.pgdg20.04+1_amd64.deb
    Debian 8 (jessie)
    ## Before installing Postgres dependencies
    mv /etc/apt/sources.list.d/backports.list /etc/apt >/dev/null
    apt-get update
    dpkg -i ./third-party/postgresql/postgresql-13_13.2-1.pgdg80+1_amd64.deb
    # After installing Postgres dependencies
    mv /etc/apt/backports.list /etc/apt/sources.list.d/backports.list >/dev/null
    apt-get update
    Debian 9 (stretch)
    dpkg -i ./third-party/postgresql/postgresql-13_13.2-1.pgdg90+1_amd64.deb
    Debian 10 (buster)
    apt update -y
    apt-get install wget sudo -y
    apt-get install -y gnupg gnupg1 gnupg2
    dpkg -i ./third-party/postgresql/postgresql-13_13.2-1.pgdg100+1_amd64.deb
  2. Stop the PostgreSQL service.

    systemctl stop postgresql.service
  3. Change permissions for the postgres folder.

    chown -R postgres:postgres /var/opt/postgres
    sed -i "s~^data_directory =.*~data_directory = '/var/opt/postgres/data'~" "/etc/postgresql/13/main/postgresql.conf"
    sed -i "s~^hba_file =.*~hba_file = '/var/opt/postgres/data/pg_hba.conf'~" "/etc/postgresql/13/main/postgresql.conf"
    sed -i "s~^ident_file =.*~ident_file = '/var/opt/postgres/data/pg_ident.conf'~" "/etc/postgresql/13/main/postgresql.conf"
    su postgres -c "/usr/lib/postgresql/13/bin/initdb --pgdata=/var/opt/postgres/data"
  4. Configure PostgreSQL to allow external IP connections.

  5. By default, PostgreSQL only allows localhost client communications. To enable different IPs to communicate with the database, you will need to configure the pg_hba.conf file.

    File Location According to Installation Type

    • Docker-compose: $JFROG_HOME/mc/var/data/postgres/data
    • Native installations: /var/opt/postgres/data

    To grant all IPs access you may add the below, under the IPv4 local connections section:

    host all all 0.0.0.0/0 trust

    Add the following line to /etc/postgresql/13/main/postgresql.conf.

    listen_addresses='*'
  6. Start PostgreSQL.

    systemctl start postgresql.service
    # or
    service postgresql start
  7. Set up the database and user.

    ## Run the script to seed the tables and schemas needed by Mission Control
    cp -f ./third-party/postgresql/createPostgresUsers.sh /tmp
    source /etc/default/locale
    cd /tmp && su postgres -c "POSTGRES_PATH=/usr/lib/postgresql/13/bin PGPASSWORD=postgres DB_PASSWORD=password bash /tmp/createPostgresUsers.sh"
  8. Put back the original pgdg.list.

    mv /etc/apt/sources.list.d/pgdg.list /etc/apt/sources.list.d/pgdg.list.tmp && cp -f /etc/apt/sources.list.d/pgdg.list.origfile /etc/apt/sources.list.d/pgdg.list
  9. Remove backup files.

    rm -f /etc/apt/sources.list.d/pgdg.list.tmp
    rm -f /etc/apt/sources.list.d/pgdg.list.origfile
  10. Put back the original sources.list.

    mv /etc/apt/sources.list /etc/apt/sources.list.tmp && cp -f /etc/apt/sources.list.origfile /etc/apt/sources.list
  11. Remove the backup files.

    rm -f /etc/apt/sources.list.tmp && rm -f /etc/apt/sources.list.origfile

Linux Archive

PostgreSQL binaries are no longer bundled with the Linux Archive installer for Mission Control. Remember to install PostgreSQL manually.

# Create the psql database (the script "mc/app/third-party/postgresql/createPostgresUsers.sh",
# which is responsible for seeding Postgres, assumes this database exists)
/psql template1
: CREATE DATABASE ;
: \q
## Run the script to seed the tables and schemas needed by Mission Control
POSTGRES_PATH= mc/app/third-party/postgresql/createPostgresUsers.sh


Setting up Your PostgreSQL Databases, Users and Schemas

Database and schema names can only be changed for a new installation. Changing the names during an upgrade will result in the loss of existing data.

Helm Users

Create a single user with permission to all schemas. Use this user's credentials during your Helm installation on this page.

  1. Log in to the PostgreSQL database as an admin user and execute the following commands.

    PostgreSQL Database, Schema and User Creation
    CREATE DATABASE mission_control WITH ENCODING='UTF8' TABLESPACE=pg_default;
    # Exit from current login
    \q
    # Login to the mission_control database using the admin user (by default, postgres)
    psql -U postgres mission_control
    CREATE USER jfmc WITH PASSWORD 'password';
    GRANT ALL ON DATABASE mission_control TO jfmc;
    CREATE SCHEMA IF NOT EXISTS jfmc_server AUTHORIZATION jfmc;
    GRANT ALL ON SCHEMA jfmc_server TO jfmc;
    CREATE SCHEMA IF NOT EXISTS insight_server AUTHORIZATION jfmc;
    GRANT ALL ON SCHEMA insight_server TO jfmc;
    CREATE SCHEMA IF NOT EXISTS insight_scheduler AUTHORIZATION jfmc;
    GRANT ALL ON SCHEMA insight_scheduler TO jfmc;
  2. Configure the system.yaml file with the database configuration details according to the information above. For example:

    shared:
      database:
        type: postgresql
        driver: org.postgresql.Driver
        url: jdbc:postgresql://localhost:5432/mission_control
        username: jfmc
        password: password

For Advanced Users

Manual Docker Compose Installation

  1. Extract the contents of the compressed archive and go to the extracted folder.

    tar -xvf jfrog-mc--compose.tar.gz

    .env file included within the Docker-Compose archive

    This .env file is used by docker-compose and is updated during installations and upgrades.

    Note that some operating systems do not display dot files by default. If you have made any changes to the file, remember to back it up before an upgrade.

  2. Create the following folder structure under $JFROG_HOME/mc.

    -- [1050 1050] var
    -- [1050 1050] data
    -- [1000 1000] data/elasticsearch
    -- [999 999] postgres
    -- [1050 1050] etc
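    A rough sketch of creating this layout with matching ownership (assumes the postgres folder sits under var/data, as in the PostgreSQL file locations listed later on this page):

    mkdir -p $JFROG_HOME/mc/var/etc
    mkdir -p $JFROG_HOME/mc/var/data/elasticsearch
    mkdir -p $JFROG_HOME/mc/var/data/postgres
    chown -R 1050:1050 $JFROG_HOME/mc/var
    chown -R 1000:1000 $JFROG_HOME/mc/var/data/elasticsearch
    chown -R 999:999 $JFROG_HOME/mc/var/data/postgres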
  3. Copy the appropriate docker-compose template from the templates folder to the extracted folder. Rename it as docker-compose.yaml.

    The commands below assume you are using the template: docker-compose-postgres.yaml.

    Requirement and matching template:
    • Mission Control with Elasticsearch: docker-compose.yaml
    • PostgreSQL: docker-compose-postgres.yaml

    Docker for Mac

    When you use Docker Compose on Mac, /etc/localtime might not work as expected since it might not be a shared location in the docker-for-mac settings.

    You can remove the following line from the selected docker-compose.yaml file to avoid installation issues.

    - /etc/localtime:/etc/localtime:ro



  4. Update the .env file.

    ## The installation directory for Mission Control. If not entered, the script will prompt you for this input. Default [$HOME/.jfrog/mc]
    ROOT_DATA_DIR=
    ## Public IP of this machine
    HOST_IP=
    ## Configuration on the first bootstrap of the cluster. Set this only for the first node.
    ES_MASTER_NODE_SETTINGS="cluster.initial_master_nodes="
  5. Customize the product configuration.
    1. Set the Artifactory connection details.
    2. Customize the PostgreSQL Database connection details. (optional)
    3. Set any additional configurations (for example: ports, node id) using the Mission Control System YAML.

      Verify that the host's ID and IP are added to the system.yaml. This is important to ensure that other products and Platform Deployments can reach this instance.
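      A sketch of how this is commonly expressed in JFrog system.yaml files (the shared.node key names are an assumption here; confirm them against the Mission Control System YAML reference):

      shared:
        node:
          id: <node id>
          ip: <host IP>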

  6. For Elasticsearch to work correctly, increase the map count. For additional information, see the Elasticsearch documentation.

  7. Create the necessary tables and users using the script: "createPostgresUsers.sh".
    • Start the PostgreSQL container.

      docker-compose -p mc-postgres -f docker-compose-postgres.yaml up -d
    • Copy the script into the PostgreSQL container.

      docker cp ./third-party/postgresql/createPostgresUsers.sh mc_postgres:/
    • Exec into the container and execute the script. This will create the database tables and users.

      PostgreSQL 9.x
      docker exec -t mc_postgres bash -c "chmod +x /createPostgresUsers.sh && gosu postgres /createPostgresUsers.sh"
      PostgreSQL 10.x/12.x
      docker exec -t mc_postgres bash -c "export DB_PASSWORD=password1 && chmod +x /createPostgresUsers.sh && su-exec postgres /createPostgresUsers.sh"
  8. Run the following commands.

    mkdir -p ${ROOT_DATA_DIR}/var/data/elasticsearch/sgconfig
    mkdir -p ${ROOT_DATA_DIR}/var/data/elasticsearch/config
    touch ${ROOT_DATA_DIR}/var/data/elasticsearch/config/unicast_hosts.txt
    chown -R 1000:1000 ${ROOT_DATA_DIR}/var/data/elasticsearch
    chmod 777 ${ROOT_DATA_DIR}/var/data/elasticsearch/config/unicast_hosts.txt
  9. Start Mission Control using docker-compose commands.

    docker-compose -p mc logs
    docker-compose -p mc ps
    docker-compose -p mc up -d
    docker-compose -p mc down
  10. Access Mission Control from your browser at: http://SERVER_HOSTNAME/ui/. For example, on your local machine: http://localhost/ui/.

  11. Check the Mission Control log.

    docker-compose -p mc logs

    Configuring the Log Rotation of the Console Log

    The console.log file can grow quickly since all services write to it. The installation scripts add a cron job to log-rotate the console.log file every hour.

    This is not done for manual Docker Compose installations. Learn more on how to configure the log rotation.
