Support for Open Standards
Artifactory Metrics
The Get Open Metrics for Artifactory REST API returns the following metrics in Open Metrics format.
| Metric | Description |
|---|---|
| app_disk_used_bytes | Total disk used by the application (home directory) |
| app_disk_free_bytes | Total disk free |
| jfrt_artifacts_gc_duration_seconds | Time taken by a GC run |
| jfrt_artifacts_gc_binaries_total | Number of binaries removed by a GC run |
| jfrt_artifacts_gc_size_cleaned_bytes | Space reclaimed by a GC run |
| jfrt_artifacts_gc_current_size_bytes | Space occupied by binaries after a GC run (FULL GC runs only) |
| jfrt_runtime_heap_freememory_bytes | Free memory available to the JVM |
| jfrt_runtime_heap_maxmemory_bytes | Maximum memory configured for the JVM |
| jfrt_runtime_heap_totalmemory_bytes | Total memory configured for the JVM |
| jfrt_runtime_heap_processors_total | Total number of processors available to the JVM |
| jfrt_db_connections_active_total | Total number of active DB connections |
| jfrt_db_connections_idle_total | Total number of idle DB connections |
| jfrt_db_connections_max_active_total | Maximum number of active DB connections |
| jfrt_db_connections_min_idle_total | Minimum number of idle DB connections |
| jfrt_http_connections_available_total | Total number of available outbound HTTP connections |
| jfrt_http_connections_leased_total | Total number of leased outbound HTTP connections |
| jfrt_http_connections_pending_total | Total number of pending outbound HTTP connections |
| jfrt_http_connections_max_total | Maximum number of outbound HTTP connections |
| jfrt_slow_queries_duration_seconds | Slow query duration in seconds |
| jfrt_slow_queries_count_total | Total number of slow queries |
| jfrt_storage_current_total_size_bytes | Total size of current storage in bytes |
| jfrt_projects_active_total | Total number of active projects |
| jfrt_artifacts_gc_next_run_seconds | Number of seconds until the next artifacts garbage collection run |
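These metrics can be retrieved with any HTTP client. A minimal sketch, assuming a default local installation on port 8082 and admin credentials (host, port, and credentials are placeholders; the path matches the Prometheus example at the end of this page):

```sh
# Fetch Artifactory metrics in Open Metrics format
# (host, port, and credentials below are placeholders for your own values)
curl -u admin:password "http://localhost:8082/artifactory/api/v1/metrics"
```

The response is plain-text exposition format, one sample per line, e.g. a line such as jfrt_runtime_heap_freememory_bytes 3407872000 (illustrative value).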
The jfrt_http_connections_* metrics collect outbound HTTP connection statistics for repositories, sorted by available pool count, and cover 10 repositories by default (the recommended value). To collect this information for more repositories, set the artifactory.httpconnections.metrics.max.total.repositories flag in the artifactory.system.properties file (available at $JFROG_HOME/var/etc/artifactory/) to any integer, as in the sketch below.
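For instance, a minimal sketch of the property entry (the value 25 is an arbitrary illustration):

```properties
# $JFROG_HOME/var/etc/artifactory/artifactory.system.properties
# Collect outbound HTTP connection metrics for up to 25 repositories (default: 10)
artifactory.httpconnections.metrics.max.total.repositories=25
```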
Xray Metrics
The Xray Metrics REST API returns the following metrics:
| Metric | Description |
|---|---|
| jfxr_db_sync_started_before_seconds | Seconds that have passed since the last Xray DB sync started running |
| | DB sync total running time |
| | Seconds that have passed since the DB sync completed persisting new updates to the database |
| | Seconds that have passed since the DB sync completed sending all impact analysis messages |
| jfxr_data_artifacts_total | Total number of Xray-scanned artifacts, by package type (the package type is a label) |
| jfxr_data_components_total | Total number of Xray-scanned components, by package type (the package type is a label) |
| | Seconds that have passed since the Xray server started on the particular node |
| app_disk_used_bytes | Disk usage in bytes |
| app_disk_free_bytes | Free space on disk in bytes |
| app_io_counters_read_bytes | Number of bytes read by the application |
| app_io_counters_write_bytes | Number of bytes written by the application |
| app_self_metrics_calc_seconds | Number of seconds taken to calculate these metrics |
| app_self_metrics_total | Total number of self metrics |
| cleanup_job_data_deleted_artifacts_in_last_batch_total | Number of artifacts deleted in the last batch of the cleanup job |
| cleanup_job_data_processed_artifacts_total | Number of artifacts processed by the cleanup job |
| cleanup_job_data_processed_artifacts_in_last_batch_total | Number of artifacts processed in the last batch of the cleanup job |
| cleanup_job_data_start_time_seconds | Start time of the last cleanup job |
| cleanup_job_data_end_time_seconds | End time of the last cleanup job |
| cleanup_job_data_time_taken_by_last_job_seconds | Time taken to complete the last cleanup job |
| cleanup_job_data_deleted_artifacts_total | Total number of artifacts deleted by the cleanup job |
| db_connection_pool_in_use_total | Number of connections in use in the DB connection pool |
| db_connection_pool_idle_total | Number of idle connections in the DB connection pool |
| db_connection_pool_max_open_total | Maximum number of open connections allowed in the DB connection pool |
| go_memstats_heap_in_use_bytes | Memory (in bytes) in use by the Go heap |
| go_memstats_heap_allocated_bytes | Memory (in bytes) allocated to the Go heap |
| go_memstats_heap_idle_bytes | Idle memory (in bytes) allocated to the Go heap |
| go_memstats_heap_objects_total | Total number of objects in the Go heap |
| go_memstats_heap_reserved_bytes | Memory (in bytes) reserved for the Go heap |
| go_memstats_gc_cpu_fraction_ratio | Fraction of CPU time used by the Go garbage collector |
| go_routines_total | Total number of goroutines |
| jfxr_jira_no_of_integrations_total | Total number of Jira integrations |
| jfxr_jira_no_of_profiles_total | Total number of Jira profiles |
| jfxr_jira_no_of_tickets_created_in_last_one_hour_total | Total number of Jira tickets created in the last hour |
| jfxr_jira_last_ticket_creation_time_seconds | Time at which the last Jira ticket was created |
| jfxr_jira_no_of_errors_in_last_hour_total | Number of Jira errors in the last hour |
| jfxr_jira_last_error_time_seconds | Time at which the last Jira error occurred |
| queue_messages_total | Total number of messages in the queue |
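As with Artifactory, these can be fetched over HTTP. A minimal sketch, assuming the Xray metrics endpoint is exposed at /xray/api/v1/metrics (an assumption — verify the path against your JFrog REST API reference) and using placeholder credentials:

```sh
# Fetch Xray metrics (endpoint path and credentials are assumptions; adjust to your deployment)
curl -u admin:password "http://localhost:8082/xray/api/v1/metrics"
```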
Logs
The artifactory_metrics.log file contains system metrics such as:
- Total disk space used
- Total disk space free
- CPU time used by the process
- JVM available memory
- JVM number of processors
- DB number of active, idle, max and min connections
- HTTP number of available, leased, pending and max connections
- Xray DB sync running time
- Xray total number of scanned artifacts and components
- Xray server start time on a node
The artifactory_metrics_events.log file contains deduplicated metrics related to an event, such as a GC run.
PDN Metrics
Metrics Log Files
The following two metric log files are created for PDN:
- PDN Server: $JF_PRODUCT_HOME/var/log/tracker-metrics.log
- PDN Node: $JF_PRODUCT_HOME/var/log/distribution-node-metrics.log
The PDN Server Metrics REST API returns the following metrics in Open Metrics format.
| Metric | Description |
|---|---|
| app_disk_used_bytes | Used bytes for the app home directory disk device |
| app_disk_free_bytes | Free bytes for the app home directory disk device |
| app_io_counters_error | Error in the app IO counter |
| app_self_metrics_calc_seconds | Total time to collect all metrics |
| app_self_metrics_total | Count of collected metrics |
| go_memstats_heap_in_use_bytes | Go heap bytes in use by the process |
| go_memstats_heap_allocated_bytes | Go heap bytes allocated by the process |
| go_memstats_heap_idle_bytes | Go heap idle bytes for the process |
| go_memstats_heap_objects_total | Number of objects in the process's Go heap |
| go_memstats_heap_reserved_bytes | Go heap bytes reserved by the process |
| go_memstats_gc_cpu_fraction_ratio | Fraction of CPU used by the Go GC (value between 0 and 1) |
| go_routines_total | Number of goroutines that currently exist |
| jftrk_cache_topology_metrics_peers_total_free_cache_size_bytes | Peers' total free cache size |
| jftrk_cache_topology_metrics_peers_average_cache_used_ratio | Peers' average cache used ratio |
| jftrk_cache_topology_metrics_peers_average_cache_free_ratio | Peers' average cache free ratio |
| jftrk_cache_topology_metrics_peers_average_max_total_cache_size_ratio | Peers' average maximum total cache size |
| jftrk_cache_topology_metrics_number_of_peers_total | Number of peers |
| jftrk_cache_topology_metrics_number_of_groups_total | Number of groups |
| jftrk_cache_topology_metrics_peers_total_cache_used_bytes | Peers' total cache used |
| jftrk_cache_topology_metrics_peers_total_max_cache_size_bytes | Peers' total maximum cache size |
| jftrk_downloads_files_fetched_total | Total number of files downloaded in PDN |
| jftrk_downloads_bytes_served_total | Total number of bytes served to clients |
| jftrk_downloads_bytes_fetched_total | Total number of bytes downloaded in PDN |
| jftrk_downloads_release_bundles_total | Total number of release bundles downloaded |
| jftrk_downloads_file_providers_avg_ratio | Average number of peers to download from, per file |
| jftrk_downloads_speed_kbps_avg_ratio | Average download speed in PDN (Kbps) |
| jftrk_downloads_errors_total | Total download errors |
| jftrk_downloads_files_served_total | Total number of files served to clients |
| sys_load_15 | Host load average in the last 15 minutes |
| sys_load_1 | Host load average in the last minute |
| sys_load_5 | Host load average in the last 5 minutes |
The PDN Node Metrics REST API returns the following metrics in Open Metrics format.
| Metric | Description |
|---|---|
| app_disk_used_bytes | Used bytes for the app home directory disk device |
| app_disk_free_bytes | Free bytes for the app home directory disk device |
| app_io_counters_error | Error in the app IO counter |
| app_self_metrics_calc_seconds | Total time to collect all metrics |
| app_self_metrics_total | Count of collected metrics |
| go_memstats_heap_in_use_bytes | Go heap bytes in use by the process |
| go_memstats_heap_allocated_bytes | Go heap bytes allocated by the process |
| go_memstats_heap_idle_bytes | Go heap idle bytes for the process |
| go_memstats_heap_objects_total | Number of objects in the process's Go heap |
| go_memstats_heap_reserved_bytes | Go heap bytes reserved by the process |
| go_memstats_gc_cpu_fraction_ratio | Fraction of CPU used by the Go GC (value between 0 and 1) |
| go_routines_total | Number of goroutines that currently exist |
| jfpdn_cache_metrics_cache_used_bytes | Cache used bytes |
| jfpdn_cache_metrics_cache_maximum_files_total | Cache maximum files |
| jfpdn_cache_metrics_cache_maximum_bytes | Cache maximum bytes |
| jfpdn_cache_metrics_cache_used_files_total | Cache used files |
| jfpdn_downloads_speed_kbps_avg_ratio | Average download speed in PDN (Kbps) |
| jfpdn_downloads_errors_total | Total download errors |
| jfpdn_downloads_files_served_total | Total number of files served to clients |
| jfpdn_downloads_files_fetched_total | Total number of files downloaded in PDN |
| jfpdn_downloads_bytes_served_total | Total number of bytes served to clients |
| jfpdn_downloads_bytes_fetched_total | Total number of bytes downloaded in PDN |
| jfpdn_downloads_release_bundles_total | Total number of release bundles downloaded |
| jfpdn_downloads_file_providers_avg_ratio | Average number of peers to download from, per file |
| sys_load_15 | Host load average in the last 15 minutes |
| sys_load_1 | Host load average in the last minute |
| sys_load_5 | Host load average in the last 5 minutes |
| sys_memory_used_bytes | Host used virtual memory |
| sys_memory_free_bytes | Host free virtual memory |
Pipelines Metrics
The following three metric log files are created for Pipelines:
- Open Metrics format:
  - Pipeline API metrics: $JF_PRODUCT_HOME/var/log/api-metrics.log
- Non-Open Metrics format:
  - Pipeline Reqsealer event metrics: $JF_PRODUCT_HOME/var/log/reqsealer-activity-event.log
  - Pipeline Sync event metrics: $JF_PRODUCT_HOME/var/log/pipelinesync-activity-event.log
Open Metrics Format
The Get Pipelines Metrics Data REST API returns the following metrics in Open Metrics format.
| Metric | Description |
|---|---|
| sys_cpu_user_seconds | User CPU usage time for the pipeline process, in seconds |
| sys_cpu_system_seconds | System CPU usage time for the pipeline process, in seconds |
| sys_cpu_total_seconds | Total CPU usage time for the pipeline process, in seconds |
| nodejs_heap_read_only_space_total | Total size allocated for the V8 heap segment “read_only_space” |
| nodejs_heap_read_only_space_used_total | Used size for the V8 heap segment “read_only_space” |
| nodejs_heap_new_space_total | Total size allocated for the V8 heap segment “new_space” |
| nodejs_heap_new_space_used_total | Used size for the V8 heap segment “new_space” |
| nodejs_heap_old_space_total | Total size allocated for the V8 heap segment “old_space” |
| nodejs_heap_old_space_used_total | Used size for the V8 heap segment “old_space” |
| nodejs_heap_code_space_total | Total size allocated for the V8 heap segment “code_space” |
| nodejs_heap_code_space_used_total | Used size for the V8 heap segment “code_space” |
| nodejs_heap_map_space_total | Total size allocated for the V8 heap segment “map_space” |
| nodejs_heap_map_space_used_total | Used size for the V8 heap segment “map_space” |
| nodejs_heap_large_object_space_total | Total size allocated for the V8 heap segment “large_object_space” |
| nodejs_heap_large_object_space_used_total | Used size for the V8 heap segment “large_object_space” |
| nodejs_heap_code_large_object_space_total | Total size allocated for the V8 heap segment “code_large_object_space” |
| nodejs_heap_code_large_object_space_used_total | Used size for the V8 heap segment “code_large_object_space” |
| nodejs_heap_new_large_object_space_total | Total size allocated for the V8 heap segment “new_large_object_space” |
| nodejs_heap_new_large_object_space_used_total | Used size for the V8 heap segment “new_large_object_space” |
| sys_memory_free_bytes | Host free virtual memory |
| sys_memory_total_bytes | Host total virtual memory |
| jfpip_pipelines_per_project_count (called jfpip_pipelines_per_project_count_count in Pipelines 1.24 and earlier) | Number of pipelines per project |
| jfpip_pipelines_count (called jfpip_pipelines_count_count in Pipelines 1.24 and earlier) | Total number of pipelines |
| jfpip_queue_messages_total_count | Message count for the queue |
| jfpip_nodepool_provisionstatus_success_count | Number of nodes with a provision status of SUCCESS |
| jfpip_nodepool_provisionstatus_cached_count | Number of nodes with a provision status of CACHED |
| jfpip_nodepool_provisionstatus_processing_count | Number of nodes with a provision status of PROCESSING |
| jfpip_nodepool_provisionstatus_failure_count | Number of nodes with a provision status of FAILURE |
| jfpip_nodepool_provisionstatus_waiting_count | Number of nodes with a provision status of WAITING |
| jfpip_concurrent_active_builds_count | Active concurrent build count |
| jfpip_concurrent_allowed_builds_count | Allowed concurrent build count |
| jfpip_concurrent_available_builds_count | Available concurrent build count |
All Node.js heap size statistics are captured using the v8.getHeapSpaceStatistics() API.
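To see the underlying numbers directly, the same statistics are available from any Node.js process. A minimal sketch using only the built-in v8 module:

```javascript
// Print the V8 heap spaces that back the nodejs_heap_* metrics above
// (read_only_space, new_space, old_space, code_space, map_space, ...)
const v8 = require('v8');

for (const space of v8.getHeapSpaceStatistics()) {
  console.log(
    `${space.space_name}: total=${space.space_size} bytes, used=${space.space_used_size} bytes`
  );
}
```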
Logs
The api-metrics.log file contains system metrics such as:
- Total disk space used
- Total disk space free
- CPU time used by the process
- Node.js heap-related information
Non-Open Metrics Format
In addition to the metrics mentioned above, Pipelines supports the following custom activity-based event metrics:
Pipeline Run & Step Events:
For every pipeline run, two types of metrics can be found in reqsealer-activity-event.log: one entry for each step status and one entry for the overall pipeline status.
```json
{"timestamp":"2022-04-05T08:30:10.088Z","startedAt":"2022-04-05T08:30:03.986Z","queuedAt":"2022-04-05T08:30:03.010Z","domain":"step","pipelineName":"my_pipeline_2","triggeredBy":"admin","branchName":"master","stepName":"p2_s1","runNumber":2,"status":"success","durationMillis":6102,"outputArtifactsCount":0,"outputResourcesCount":0}
{"timestamp":"2022-04-05T08:30:10.088Z","startedAt":"2022-04-05T08:30:03.986Z","domain":"run","pipelineName":"my_pipeline_2","triggeredBy":"admin","branchName":"master","runNumber":2,"status":"success","durationMillis":6102}
```
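Because each entry is a single JSON object per line, the log is easy to slice with standard tools. A sketch, assuming jq is installed (the failure filter is an illustration, not part of the product):

```sh
# Count non-successful pipeline runs per pipeline from the reqsealer event log
jq -r 'select(.domain == "run" and .status != "success") | .pipelineName' \
  "$JF_PRODUCT_HOME/var/log/reqsealer-activity-event.log" | sort | uniq -c
```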
Pipeline Sync Events:
For every pipeline sync activity, the following metrics can be found in pipelinesync-activity-event.log.
```json
{"timestamp":"2022-04-06T10:00:45.673Z","domain":"pipelineSync","pipelineSourceName":"Sample","repositoryName":"a-0908/myFirstRepo","branch":"master","status":"success","durationMillis":10498}
```
Webhook Events (Pipelines 1.25 and above):
For every webhook activity for the supported SCMs, the following metrics can be found in hookhandler-activity-event.log.
```json
{"timestamp":"2022-06-10T16:29:29.894Z","domain":"webhook","status":"success","durationMillis":533,"webhookId":"11819184-2d88-4180-92da-aa13092d0ca4","integration":"my_bitbucket","source":"gitrepo","eventType":"branchCreated","branchName":"kt4","repositoryName":"krishnakadiyam/jfrog-pipelines-second"}
{"timestamp":"2022-06-10T16:29:40.845Z","domain":"webhook","status":"success","durationMillis":323,"webhookId":"6d098e3a-7b4b-427c-ba53-b1174baeeabd","integration":"my_bitbucket","source":"gitrepo","eventType":"branchDeleted","branchName":"kt4","repositoryName":"krishnakadiyam/jfrog-pipelines-second"}
{"timestamp":"2022-06-13T05:29:55.062Z","domain":"webhook","status":"success","durationMillis":234,"webhookId":"2d4d698b-b083-42fd-a28e-670d9cec4c1a","integration":"glRepo","source":"gitrepo","eventType":"tag","repositoryName":"jfrog-pipelines-second","tagName":"refs/tags/kt4"}
```
Pipelines Integrations Events (Pipelines 1.29 and above):
For every integrations activity, the following metrics can be found in api-activity-event.log.
```json
{"timestamp":"2022-11-10T10:36:50.004Z","domain":"projectIntegrations","eventType":"create","status":"success","integrationName":"iwh","integrationId":1,"integrationType":"incomingWebhook","createdBy":"admin","updatedBy":"admin","durationMillis":188}
{"timestamp":"2022-11-10T10:37:43.423Z","domain":"projectIntegrations","eventType":"update","status":"success","integrationName":"iwh","integrationId":1,"integrationType":"incomingWebhook","createdBy":"admin","updatedBy":"admin","durationMillis":38}
{"timestamp":"2022-11-10T10:37:55.901Z","domain":"projectIntegrations","eventType":"delete","status":"success","integrationName":"iwh","integrationId":"1","integrationType":"incomingWebhook","createdBy":"admin","updatedBy":"admin","durationMillis":85}
```
Usage Example - Prometheus
Update the prometheus.yml file to add a scrape job, replacing the following configuration values as appropriate:
- job_name: Use a name that is unique among your scrape jobs. All metrics collected through this job automatically get a job label with this value.
- username: The name of an admin user.
- password: The admin password.
- targets: The URL of the Artifactory node.
```yaml
- job_name: 'artifactory'
  # Configures the protocol scheme used for requests.
  # [scheme: <string> | default = http]
  # Sets the `Authorization` header on every scrape request
  # with the configured credentials.
  authorization:
    # [type: <string> | default: Bearer]
    credentials: <access token>
  # metrics_path defaults to '/metrics'
  metrics_path: '/artifactory/api/v1/metrics'
  static_configs:
    - targets: ['<host>:<port>']
```
For more information about Prometheus scrape job configuration, see the Prometheus documentation.
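Once Prometheus is scraping the endpoint, the metrics listed above can be queried and alerted on. A minimal PromQL sketch (the 10% threshold is an arbitrary illustration):

```promql
# Fire when free JVM heap drops below 10% of the configured maximum
jfrt_runtime_heap_freememory_bytes / jfrt_runtime_heap_maxmemory_bytes < 0.10
```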