Overview

This page provides tips to solve common problems that users have encountered.

JFrog Platform

The following structure is common across all JFrog products.

  • bin: Contains helper scripts for the installer.
  • third-party: Contains third-party software and product-specific bundles (for non-Docker Compose installers).
  • templates: Docker Compose templates (only for Docker Compose installers).
  • install.sh: Main installer script (for non-Docker Compose installers).
  • config.sh: Main configure script (only for Docker Compose installers).
  • readme.md: Readme file providing the package details.

Depending on the installer type:

  • RPM / Debian Installers: Set the data directory path in the variable JF_PRODUCT_VAR to the customized data folder and start the services. Set the system environment variable to point to a custom location in your system's environment variables files. See Ubuntu System environment variables.

  • Archive Installer: By default, the data directory is set to the unzip-location/var. You can symlink this directory to any folder you want.

  • Docker Compose Installer: Set the JF_ROOT_DATA_DIR variable in the .env file that comes packaged with the installer.
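The archive installer's symlink approach above can be sketched as follows. All paths here are hypothetical demo placeholders; substitute your actual unzip location and data folder, and move any existing var directory aside first:

```shell
# Hypothetical demo paths; replace with your real unzip location and data folder.
DEMO=/tmp/jfrog-demo
mkdir -p "$DEMO/custom-data" "$DEMO/unzip-location"

# Point the default data directory (unzip-location/var) at the custom folder.
ln -sfn "$DEMO/custom-data" "$DEMO/unzip-location/var"

# Confirm the link target.
readlink "$DEMO/unzip-location/var"
```
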

It is recommended to run a health check on the specific JFrog product Router node, which is connected to all the node's microservices. This will provide you with the latest health information for the node.

For example, Artifactory's Health Check REST API:

GET /router/api/v1/system/health

Each microservice has its own service log. However, it is recommended to start your debugging process by using the console.log, which is a collection of all service logs of all products in a node.

JFrog Artifactory, Insight, and Distribution are bundled with Java 11. To customize the Java runtime, configure shared.extraJavaOpts in the system.yaml.
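For example, a minimal system.yaml fragment (the heap flags are illustrative placeholders, not recommended sizes):

```yaml
shared:
    extraJavaOpts: "-Xms1g -Xmx4g"
```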

The default ports used by each JFrog product can be modified in the product's system.yaml file.
For example, to set Artifactory to run on a different port (and not on the default 8081 port), perform the following:

  1. Open the Artifactory $JFROG_HOME/artifactory/var/etc/system.yaml file.
  2. Add or edit the new port key under the artifactory section.

    artifactory:
        port:

system.full-template.yaml

Examples for all the different configuration values, including application ports, are available in the $JFROG_HOME//var/etc/system.full-template.yaml file.


Access Service

Symptoms

During startup, Artifactory fails to start and an error is thrown:

java.lang.IllegalStateException: Provided private key and latest private key fingerprints mismatch.
Cause

Artifactory tries to validate and compare the access keys' fingerprints that reside in Artifactory's database and on the local file system. If the keys do not match, the exception above is thrown along with the mismatching fingerprint IDs.
This could occur during an attempted upgrade/installation of Artifactory.

Resolution

Follow the steps below to make sure that all instances in your circle of trust have the same private key and root certificate:

Key rotation will invalidate any issued access tokens

The procedure below will create new key pairs which in turn will invalidate any existing Access Tokens.

    1. Create an empty marker file called bootstrap.reset_root_keys under $ARTIFACTORY_HOME/access/etc/.
    2. Restart Artifactory.
    3. Verify that the $ARTIFACTORY_HOME/logs/artifactory.log or $ARTIFACTORY_HOME/access/logs/access.log file shows the following entry:

    ****************************************************************
    *** Skipping verification of the root private fingerprint   ***
    ****************************************************************
    *** Private key fingerprint will be overwritten             ***
    ****************************************************************
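Step 1 above can be sketched as follows. The home path is a hypothetical demo placeholder; use your real $ARTIFACTORY_HOME:

```shell
# Hypothetical demo home; in a real installation this is your $ARTIFACTORY_HOME.
ARTIFACTORY_HOME=/tmp/artifactory-demo
mkdir -p "$ARTIFACTORY_HOME/access/etc"

# Empty marker file that triggers the root key reset on the next startup.
touch "$ARTIFACTORY_HOME/access/etc/bootstrap.reset_root_keys"
ls "$ARTIFACTORY_HOME/access/etc"
```
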



SSL / TLS Issues

Here are some Java options to help troubleshoot SSL/TLS issues in Artifactory:

  • -Djavax.net.debug=ssl:handshake
  • -Djava.security.debug=certpath,provider
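These JVM flags can be passed to Artifactory through shared.extraJavaOpts in the system.yaml; a minimal sketch:

```yaml
shared:
    extraJavaOpts: "-Djavax.net.debug=ssl:handshake -Djava.security.debug=certpath,provider"
```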

Reverse Proxy Issues

The error "httputil: ReverseProxy read error during body copy" shows up in the logs (console, application logs). The error originates from Traefik, and there is no actual effect observed related to these errors.



Access Tokens

Symptoms
Authentication with an access token fails with the error "Token validation failed".
Cause
The implementation of access tokens changed in Artifactory 5.4. The change is backward compatible, so tokens created with earlier versions of Artifactory can be authenticated in the new version; however, the reverse is not true. Tokens created in version 5.4 or later cannot be authenticated by versions earlier than 5.4.
Resolution
Either upgrade your older Artifactory instances, or make sure you only create access tokens with the older instances.

High Availability

Xray

To adjust the active node name and IP on the secondary node after an HA installation, it is recommended to re-run the installation wrapper script. Alternatively, manually modify the following files:

RPM/Debian Installation
  1. $JFROG_HOME/xray/var/etc/system.yaml
Docker Compose Installation
  1. $JFROG_HOME/xray/var/etc/system.yaml
  2. /.env
  3. $JFROG_HOME/xray/app/third-party/rabbitmq/rabbitmq.conf

Insight

Installation

Cause
The disk used to store Elasticsearch data has exceeded 95% storage usage.
Resolution

1. Stop the services.

2. Clear space on disk used to store Elasticsearch data.

3. Start the services.

4. Change Elasticsearch indices setting to RW (read-write).

curl -u: -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'

The default username and password for the internal Elasticsearch are both admin.

Debug Log Configuration

Cause
From version 4.x, the logback.xml file uses a different way to enable debug logging.
Resolution

To configure the Insight log for debug logging:
In the $JFROG_HOME/var/opt/jfrog/mc/etc/insight/logback.xml file, modify the logger name line as follows:

<logger name="org.jfrog.insight" level="DEBUG"/>

Changes made to the logging configuration are reloaded within several seconds without requiring a restart.

Insight Trends Not Displaying

Cause
Incorrect Elasticsearch indices used.
Resolution
  1. Log in to the Insight container.

  2. Disable AUTO_CREATE.

    curl -H 'Content-Type:application/json' -XPUT localhost:8082/elasticsearch/_cluster/settings -d'{"persistent":{"action.auto_create_index":"false"}}' -uadmin:admin
  3. Delete index in Elasticsearch by issuing:

    curl -XDELETE http://localhost:8082/elasticsearch/active_request_data -uadmin:admin
  4. Delete index in Elasticsearch by issuing:

    curl -XDELETE http://localhost:8082/elasticsearch/active_metrics_data -uadmin:admin
  5. Delete template.

    curl -X DELETE localhost:8082/elasticsearch/_template/request_logs_template_7 -uadmin:admin
  6. Delete template.

    curl -X DELETE localhost:8082/elasticsearch/_template/metrics_insight_template_7 -uadmin:admin
  7. Stop Insight.

  8. Start Insight.




Pipelines

Installation

Symptoms

When running Pipelines install, you receive the following message:

# Setting platform config
##################################################
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Cause

The Docker service is not running. This can be verified by running docker info.

Resolution

Restart the Docker service:

$ systemctl stop docker
$ systemctl start docker

OR

$ systemctl restart docker

OR

$ service docker restart

OR

$ service docker stop
$ service docker start
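A quick sketch of the docker info check mentioned above, usable before re-running the installer; it only reports whether the daemon is reachable:

```shell
# Probe the Docker daemon; 'docker info' exits non-zero when the daemon
# (or the docker CLI itself) is unavailable.
if docker info >/dev/null 2>&1; then
  status="running"
else
  status="not running"
fi
echo "Docker daemon: $status"
```
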

Node initialization

Symptoms
check_win_containers_enabled : Windows Containers must be enabled. Please install the feature, restart this machine and run this script again.
Cause

The node does not have containers enabled.

Resolution

Enable containers for Windows. Run the following in PowerShell with elevated privileges and then restart the machine.

> Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
> Enable-WindowsOptionalFeature -Online -FeatureName Containers -All
Symptoms

When initializing a new node, an error in the output states that node is not found. Initialization then fails.

Cause

NodeJS is installed, but misconfigured. The error most likely occurred because it was not found in the path.

Resolution

Uninstall NodeJS and allow the build node initialization to reinstall.

If NodeJS was originally installed as part of node initialization, the following commands should work.

On Ubuntu, CentOS, or RHEL:
$ sudo rm -rf /usr/local/bin/node
$ sudo rm -rf /usr/local/lib/node_modules/npm/

On Windows:
> choco uninstall nodejs

Pipelines Error Messages

This section lists commonly encountered Pipelines error messages, possible causes, and some suggestions for resolving the errors. If you have trouble fixing any of these errors, submit a request to Support for further investigation.

Error: All resource versions are not fetched

Error
reqKick|executeStep|step|prepData|jFrogPipelinesSessionId:28be9c21-4ad6-4e3d-9411-7b9988535fd1|_getResourceVersions, All resource versions are not fetched. Requested resource versions: 16; received resource versions: []
Cause

After the run was triggered, but before it started running, one or more resources in the pipeline were reset. Hence, while fetching the resources associated with the run, the resource version was returned as an empty array.

Resolution

Re-run the pipeline.

When a resource is reset, it wipes out the resource version history and resets it to a single version, which is now considered the latest. This version is used for the new run.

Error: fatal: reference is not a tree

Error
fatal: reference is not a tree: 679e2fc3c2590f7dbaf64534a325ac60b4dc8689
Cause

This could be a result of using git push --force or git rebase, which deletes the commit and causes the pipeline to not run.

Resolution

Either:

  • Reset the resource and then trigger the pipeline again. Note that if there are several GitRepo resources in the pipeline, this needs to be done for all of them.

or

  • Push another commit, so that all the resources are updated automatically.

Error: Failed to create pvc for node

Error
Failed to create pvc for node
Cause

Either the Kubernetes configuration does not have access to create aPersistent Volume Claim (PVC)resource or Pipelines cannot connect to the provided Kubernetes host server.

Resolution

Review the Kubernetes configurations and verify that the Kube Config provided while creating the Kubernetes Integration has adequate permissions.

Error: SCM provider credentials do not have enough permissions

Error
The credentials provided for the integration "" do not have enough permissions. Ensure that the credentials exist and have the correct permissions for the provider: github.
Cause

The credentials (username and/or token) provided while creating the integration are either incorrect or insufficient.

Resolution

Ensure that the credentials provided for the SCM provider are correct and have sufficient permissions.

Error: SCM provider URL is invalid

Error
The URL provided for the integration “” is invalid. Provide a valid URL for the SCM provider and try again.
Cause

The SCM URL provided while creating the integration is incorrect.

Resolution

Ensure that the URL provided for the SCM provider is correct.

Error: SCM provider repository path is invalid

Error
The repository path "" is either invalid or does not exist. Ensure that the repository path exists and has the correct permissions for the integration: .
Cause

The repository path provided for the SCM provider is either incorrect or does not exist.

Resolution

Ensure that the repository name provided for the SCM provider is correct.

Error: Step type cannot be updated

Error
type cannot be updated from  to  in step 

Example: build/ci/pipelines.yml: type cannot be updated from Bash to Matrix in step e2e_local_tests

Cause

After the pipeline performs a sync, a step's type should not be modified, as it can cause pipeline sync errors.

Resolution

Though not recommended, if you do want to change a step's type, perform the following steps:

  1. Change the step's name and type.
  2. Wait for the pipeline to sync.
  3. After the sync completes, change the step's name back to the old name.

Error: Ubuntu 16.04 not supported

Error
Ubuntu_16.04 has reached end of support. Please upgrade to a higher version.
Cause

Ubuntu Linux 16.04 LTS reached the end of its five-year LTS window on April 30, 2021 and is no longer supported by its vendor. Due to this, Pipelines no longer supports your existing Ubuntu 16 node pools.

Resolution
  • Dynamic Node Pools: Existing Ubuntu 16 dynamic pools will be automatically migrated to Ubuntu 18, provided those were created with the default build plane images. If you have any custom Ubuntu 16 node pools, they must be manually migrated to Ubuntu 18 or higher.
  • Static Node Pools: Upgrade all your existing Ubuntu 16 static node pools to Ubuntu 18 or higher.

For information about the supported Ubuntu versions, see the System Requirements Matrix.

Error: CentOS 8.0 not supported

Error
CentOS 8 has reached end of support. Please change the OS to another supported version. For the list of supported versions, see System Requirements.
Cause

CentOS 8.x reached end of life in December 2021 and is no longer supported by its vendor. Due to this, Pipelines no longer supports your existing CentOS 8 node pools.

Resolution
  • Dynamic Node Pools: Existing CentOS 8.x dynamic pools will be automatically migrated to CentOS 7.0, provided those were created with the default build plane images. If you have any custom CentOS 8.x node pools, they must be manually migrated to either CentOS 7.0 or a different OS.
  • Static Node Pools: After upgrading to the next major release of Pipelines, either remove any CentOS 8.x node pools or change the machine image to a different OS.

For the list of supported OS versions, see System Requirements.

Error: postHook returned error 422

Error
Failed to sync hooks err: Webhook creation failed for path: userName/repoName and integration: myGithub with err: postHook returned error 422 for userName/repoName
Cause

This is usually the result of too many webhooks. GitHub allows 20 webhooks per repository.

Resolution

In GitHub, go to the Settings | Webhooks tab for the relevant repository and delete all the failed webhooks.

Error: Connection was not successful

Error
Connection was not successful
Cause

One of the reasons for this message to appear is when the Artifactory/Distribution URL provided for the integration is incorrect.

Resolution

Verify that the Artifactory/Distribution URL provided for the integration is correct. If you find that it is incorrect, update the URL, use the Test Connection button to verify, and then save.
