mirror of
https://github.com/ptarmiganlabs/butler-sos.git
synced 2025-12-19 17:58:18 -05:00
fix(influxdb)!: Move InfluxDB maxBatchSize config setting to sit right under influxdbConfig section in YAML config file
@@ -1,414 +0,0 @@

# Butler SOS Insider Build Automatic Deployment Setup

This document describes the setup required to enable automatic deployment of Butler SOS insider builds to the testing server.

## Overview

The GitHub Actions workflow `insiders-build.yaml` now includes automatic deployment of Windows insider builds to the `host2-win` server. After a successful build, the deployment job will:

1. Download the Windows installer build artifact
2. Stop the "Butler SOS insiders build" Windows service
3. Replace the binary with the new version
4. Start the service again
5. Verify that the deployment was successful
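A deployment job of this shape might look roughly like the sketch below. The job id, artifact name, and step details are illustrative assumptions, not the actual workflow contents:

```yaml
# Hypothetical sketch of the deploy job - names and steps are assumptions
deploy-windows-insider:
    needs: build
    runs-on: host2-win
    steps:
        - name: Download Windows build artifact
          uses: actions/download-artifact@v4
          with:
              name: butler-sos-windows
              path: ./download

        - name: Stop service, replace binary, start service
          shell: pwsh
          run: |
              Stop-Service -Name "Butler SOS insiders build"
              Copy-Item ".\download\butler-sos.exe" "C:\butler-sos-insider" -Force
              Start-Service -Name "Butler SOS insiders build"
```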
## Manual Setup Required

### 1. GitHub Variables Configuration (Optional)

The deployment workflow supports configurable properties via GitHub repository variables. All have sensible defaults, so configuration is optional:

| Variable Name                        | Description                                          | Default Value               |
| ------------------------------------ | ---------------------------------------------------- | --------------------------- |
| `BUTLER_SOS_INSIDER_DEPLOY_RUNNER`   | GitHub runner name/label to use for deployment       | `host2-win`                 |
| `BUTLER_SOS_INSIDER_SERVICE_NAME`    | Windows service name for Butler SOS                  | `Butler SOS insiders build` |
| `BUTLER_SOS_INSIDER_DEPLOY_PATH`     | Directory path to deploy the binary to               | `C:\butler-sos-insider`     |
| `BUTLER_SOS_INSIDER_SERVICE_TIMEOUT` | Timeout in seconds for service stop/start operations | `30`                        |
| `BUTLER_SOS_INSIDER_DOWNLOAD_PATH`   | Temporary download path for artifacts                | `./download`                |

**To configure GitHub variables:**

1. Go to your repository → Settings → Secrets and variables → Actions
2. Click on the "Variables" tab
3. Click "New repository variable"
4. Add any of the above variable names with your desired values

The workflow automatically uses these values, falling back to the defaults for any variable that is not set.
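Inside the workflow, the fallback to defaults uses the standard GitHub Actions expression pattern. The surrounding step context here is an assumption; only the expression syntax itself is standard:

```yaml
# Use the repository variable if set, otherwise the default
runs-on: ${{ vars.BUTLER_SOS_INSIDER_DEPLOY_RUNNER || 'host2-win' }}
env:
    SERVICE_NAME: ${{ vars.BUTLER_SOS_INSIDER_SERVICE_NAME || 'Butler SOS insiders build' }}
    SERVICE_TIMEOUT: ${{ vars.BUTLER_SOS_INSIDER_SERVICE_TIMEOUT || '30' }}
```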
**Example customization:**

```yaml
# Set a custom runner name
BUTLER_SOS_INSIDER_DEPLOY_RUNNER: "my-custom-runner"

# Use a different service name
BUTLER_SOS_INSIDER_SERVICE_NAME: "Butler SOS Testing Service"

# Deploy to a different directory (single quotes avoid YAML backslash-escape issues)
BUTLER_SOS_INSIDER_DEPLOY_PATH: 'D:\Apps\butler-sos-test'

# Increase the timeout for slower systems
BUTLER_SOS_INSIDER_SERVICE_TIMEOUT: "60"
```
### 2. GitHub Runner Configuration

On the deployment server (default: `host2-win`, configurable via the `BUTLER_SOS_INSIDER_DEPLOY_RUNNER` variable), ensure the GitHub runner is configured with:

**Runner Labels:**

- The runner must be labeled to match the `BUTLER_SOS_INSIDER_DEPLOY_RUNNER` variable value (default: `host2-win`)

**Permissions:**

- The runner service account must have permission to:
    - Stop and start Windows services
    - Write to the deployment directory (default: `C:\butler-sos-insider`, configurable via `BUTLER_SOS_INSIDER_DEPLOY_PATH`)
    - Execute PowerShell scripts

**PowerShell Execution Policy:**

```powershell
# Run as Administrator
Set-ExecutionPolicy RemoteSigned -Scope LocalMachine
```
### 3. Windows Service Setup

Create a Windows service for Butler SOS. The service name and deployment path can be customized via GitHub repository variables (see section 1).

**Default values:**

- Service Name: `"Butler SOS insiders build"` (configurable via `BUTLER_SOS_INSIDER_SERVICE_NAME`)
- Deploy Path: `C:\butler-sos-insider` (configurable via `BUTLER_SOS_INSIDER_DEPLOY_PATH`)

**Option A: Using NSSM (Non-Sucking Service Manager) - Recommended**

NSSM is a popular tool for creating Windows services from arbitrary executables and provides better service management capabilities than the built-in tooling.

First, download and install NSSM:

1. Download NSSM from https://nssm.cc/download
2. Extract to a location like `C:\nssm`
3. Add `C:\nssm\win64` (or `win32`) to your system PATH

```cmd
REM Run as Administrator

REM Install the service
nssm install "Butler SOS insiders build" "C:\butler-sos-insider\butler-sos.exe"

REM Set service parameters
nssm set "Butler SOS insiders build" AppParameters "--config C:\butler-sos-insider\config\production_template.yaml"
nssm set "Butler SOS insiders build" AppDirectory "C:\butler-sos-insider"
nssm set "Butler SOS insiders build" DisplayName "Butler SOS insiders build"
nssm set "Butler SOS insiders build" Description "Butler SOS insider build for testing"
nssm set "Butler SOS insiders build" Start SERVICE_DEMAND_START

REM Optional: Set up logging
nssm set "Butler SOS insiders build" AppStdout "C:\butler-sos-insider\logs\stdout.log"
nssm set "Butler SOS insiders build" AppStderr "C:\butler-sos-insider\logs\stderr.log"

REM Optional: Set service account (default is Local System)
REM nssm set "Butler SOS insiders build" ObjectName ".\ServiceAccount" "password"
```
**NSSM Service Management Commands:**

```cmd
REM Start the service
nssm start "Butler SOS insiders build"

REM Stop the service
nssm stop "Butler SOS insiders build"

REM Restart the service
nssm restart "Butler SOS insiders build"

REM Check service status
nssm status "Butler SOS insiders build"

REM Remove the service (if needed)
nssm remove "Butler SOS insiders build" confirm

REM Edit the service configuration
nssm edit "Butler SOS insiders build"
```

**Using NSSM with PowerShell:**

```powershell
# Run as Administrator
$serviceName = "Butler SOS insiders build"
$exePath = "C:\butler-sos-insider\butler-sos.exe"
$configPath = "C:\butler-sos-insider\config\production_template.yaml"

# Install service
& nssm install $serviceName $exePath
& nssm set $serviceName AppParameters "--config $configPath"
& nssm set $serviceName AppDirectory "C:\butler-sos-insider"
& nssm set $serviceName DisplayName $serviceName
& nssm set $serviceName Description "Butler SOS insider build for testing"
& nssm set $serviceName Start SERVICE_DEMAND_START

# Create logs directory
New-Item -ItemType Directory -Path "C:\butler-sos-insider\logs" -Force

# Set up logging
& nssm set $serviceName AppStdout "C:\butler-sos-insider\logs\stdout.log"
& nssm set $serviceName AppStderr "C:\butler-sos-insider\logs\stderr.log"

Write-Host "Service '$serviceName' installed successfully with NSSM"
```
**Option B: Using PowerShell**

```powershell
# Run as Administrator
$serviceName = "Butler SOS insiders build"
$exePath = "C:\butler-sos-insider\butler-sos.exe"
$configPath = "C:\butler-sos-insider\config\production_template.yaml"

# Create the service
New-Service -Name $serviceName -BinaryPathName "$exePath --config $configPath" -DisplayName $serviceName -Description "Butler SOS insider build for testing" -StartupType Manual

# The service runs as Local System by default; to use a custom account:
# $credential = Get-Credential
# $service = Get-WmiObject -Class Win32_Service -Filter "Name='$serviceName'"
# $service.Change($null,$null,$null,$null,$null,$null,$credential.UserName,$credential.GetNetworkCredential().Password)
```
**Option C: Using the SC command**

```cmd
REM Run as Administrator
REM Note: sc.exe requires a space after each option's equals sign
sc create "Butler SOS insiders build" binPath= "C:\butler-sos-insider\butler-sos.exe --config C:\butler-sos-insider\config\production_template.yaml" DisplayName= "Butler SOS insiders build" start= demand
```

**Option D: Using Windows Service Manager (services.msc)**

1. Open the Services management console
2. Right-click and select "Create Service"
3. Fill in the details:
    - Service Name: `Butler SOS insiders build`
    - Display Name: `Butler SOS insiders build`
    - Path to executable: `C:\butler-sos-insider\butler-sos.exe`
    - Startup Type: Manual or Automatic, as preferred
### 4. Directory Setup

Create the deployment directory with proper permissions:

```powershell
# Run as Administrator
$deployPath = "C:\butler-sos-insider"
$runnerUser = "NT SERVICE\github-runner" # Adjust based on your runner service account

# Create the directory
New-Item -ItemType Directory -Path $deployPath -Force

# Grant permissions to the runner service account
$acl = Get-Acl $deployPath
$accessRule = New-Object System.Security.AccessControl.FileSystemAccessRule($runnerUser, "FullControl", "ContainerInherit,ObjectInherit", "None", "Allow")
$acl.SetAccessRule($accessRule)
Set-Acl -Path $deployPath -AclObject $acl

Write-Host "Directory created and permissions set for: $deployPath"
```
### 5. Service Permissions

Grant the GitHub runner service account permission to manage the Butler SOS service:

```powershell
# Run as Administrator
# Download and use the SubInACL tool, or use PowerShell with .NET classes

$serviceName = "Butler SOS insiders build"
$runnerUser = "NT SERVICE\github-runner" # Adjust based on your runner service account

# This is a simplified example - you may need more advanced permission management
# depending on your security requirements

Write-Host "Service permissions need to be configured manually using Group Policy or SubInACL"
Write-Host "Grant '$runnerUser' the following rights:"
Write-Host "- Log on as a service"
Write-Host "- Start and stop services"
Write-Host "- Manage service permissions for '$serviceName'"
```
## Testing the Deployment

### Manual Test

To manually test the deployment process:

1. Trigger the insider build workflow in GitHub Actions
2. Monitor the workflow logs for the `deploy-windows-insider` job
3. Check that the service stops and starts properly
4. Verify that the new binary is deployed to `C:\butler-sos-insider`
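A quick spot-check from the server itself can confirm steps 3 and 4. This is a sketch; adjust the service name and path if you changed the defaults via GitHub variables:

```powershell
$serviceName = "Butler SOS insiders build"
$deployPath = "C:\butler-sos-insider"

# The service should be running after a successful deployment
Get-Service -Name $serviceName | Select-Object Status, StartType

# The binary's timestamp should match the latest deployment
Get-Item "$deployPath\butler-sos.exe" | Select-Object LastWriteTime, Length
```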
### Troubleshooting

**Common Issues:**

1. **Service not found:**
    - Ensure the service name is exactly `"Butler SOS insiders build"` (or your `BUTLER_SOS_INSIDER_SERVICE_NAME` value)
    - Check that the service was created successfully
    - If using NSSM: `nssm status "Butler SOS insiders build"`

2. **Permission denied:**
    - Verify the GitHub runner has service management permissions
    - Check directory permissions for `C:\butler-sos-insider`
    - If using NSSM: Ensure NSSM is in the system PATH and accessible to the runner account

3. **Service won't start:**
    - Check the service configuration and binary path
    - Review the Windows Event Logs for service startup errors
    - Ensure the configuration file is present and valid
    - **If using NSSM:**
        - Check service configuration: `nssm get "Butler SOS insiders build" AppDirectory`
        - Check parameters: `nssm get "Butler SOS insiders build" AppParameters`
        - Review NSSM logs in `C:\butler-sos-insider\logs\` (if configured)
        - Use `nssm edit "Butler SOS insiders build"` to open the GUI editor

4. **GitHub Runner not found:**
    - Verify the runner is labeled to match `BUTLER_SOS_INSIDER_DEPLOY_RUNNER` (default: `host2-win`)
    - Ensure the runner is online and accepting jobs

5. **NSSM-specific issues:**
    - **NSSM not found:** Ensure NSSM is installed and in the system PATH
    - **Service already exists:** Use `nssm remove "Butler SOS insiders build" confirm` to remove and recreate it
    - **Wrong parameters:** Use `nssm set "Butler SOS insiders build" AppParameters "new-parameters"`
    - **Logging issues:** Verify the logs directory exists and has write permissions
**NSSM Diagnostic Commands:**

```cmd
REM Check if NSSM is available
nssm version

REM Get all service parameters
nssm dump "Butler SOS insiders build"

REM Check specific configuration
nssm get "Butler SOS insiders build" Application
nssm get "Butler SOS insiders build" AppDirectory
nssm get "Butler SOS insiders build" AppParameters
nssm get "Butler SOS insiders build" Start

REM View service status
nssm status "Butler SOS insiders build"
```
**Log Locations:**

- GitHub Actions logs: Available in the workflow run details
- Windows Event Logs: Check the System and Application logs
- Service logs: Check the Butler SOS application logs, if configured
- **NSSM logs** (if using NSSM with logging enabled):
    - stdout: `C:\butler-sos-insider\logs\stdout.log`
    - stderr: `C:\butler-sos-insider\logs\stderr.log`
## Configuration Files

The deployment includes the configuration template and log appender files in the zip package:

- `config/production_template.yaml` - Main configuration template
- `config/log_appender_xml/` - Log appender XML configuration files

Adjust the service binary path to point to your actual configuration file location if it differs from the template.
## Security Considerations

- The deployment uses PowerShell scripts with `continue-on-error: true`, so a failed deployment does not fail the whole workflow
- Service management requires elevated permissions - ensure the GitHub runner runs with appropriate privileges
- Consider using a dedicated service account rather than Local System for better security
- Monitor deployment logs for any security-related issues
## Support

If you encounter issues with the automatic deployment:

1. Check the GitHub Actions workflow logs for detailed error messages
2. Verify that the manual setup steps were completed correctly
3. Test service operations manually before relying on automation
4. Consider running a test deployment on a non-production system first
```diff
@@ -506,28 +506,26 @@ Butler-SOS:
         host: influxdb.mycompany.com # InfluxDB host, hostname, FQDN or IP address
         port: 8086 # Port where InfluxDB is listening, usually 8086
         version: 2 # Is the InfluxDB instance version 1.x or 2.x? Valid values are 1, 2, or 3
+        maxBatchSize: 1000 # Maximum number of data points to write in a single batch. If a batch fails, progressive retry with smaller sizes (1000→500→250→100→10→1) will be attempted. Valid range: 1-10000.
         v3Config: # Settings for InfluxDB v3.x only, i.e. Butler-SOS.influxdbConfig.version=3
             database: mydatabase
             description: Butler SOS metrics
             token: mytoken
             retentionDuration: 10d
-            timeout: 10000 # Optional: Socket timeout in milliseconds (default: 10000)
+            timeout: 10000 # Optional: Socket timeout in milliseconds (writing to InfluxDB) (default: 10000)
             queryTimeout: 60000 # Optional: Query timeout in milliseconds (default: 60000)
-            maxBatchSize: 1000 # Maximum number of data points to write in a single batch. If a batch fails, progressive retry with smaller sizes (1000→500→250→100→10→1) will be attempted. Valid range: 1-10000.
         v2Config: # Settings for InfluxDB v2.x only, i.e. Butler-SOS.influxdbConfig.version=2
             org: myorg
             bucket: mybucket
             description: Butler SOS metrics
             token: mytoken
             retentionDuration: 10d
-            maxBatchSize: 1000 # Maximum number of data points to write in a single batch. If a batch fails, progressive retry with smaller sizes (1000→500→250→100→10→1) will be attempted. Valid range: 1-10000.
         v1Config: # Settings below are for InfluxDB v1.x only, i.e. Butler-SOS.influxdbConfig.version=1
             auth:
                 enable: false # Does influxdb instance require authentication (true/false)?
                 username: <username> # Username for Influxdb authentication. Mandatory if auth.enable=true
                 password: <password> # Password for Influxdb authentication. Mandatory if auth.enable=true
             dbName: senseops
-            maxBatchSize: 1000 # Maximum number of data points to write in a single batch. If a batch fails, progressive retry with smaller sizes (1000→500→250→100→10→1) will be attempted. Valid range: 1-10000.
         # Default retention policy that should be created in InfluxDB when Butler SOS creates a new database there.
         # Any data older than retention policy threshold will be purged from InfluxDB.
         retentionPolicy:
```
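After this change, `maxBatchSize` sits once, directly under `influxdbConfig`, and applies regardless of InfluxDB version. An abridged view of the resulting config shape (indentation assumed to match the template above):

```yaml
Butler-SOS:
    influxdbConfig:
        host: influxdb.mycompany.com
        port: 8086
        version: 2
        # maxBatchSize now lives here, once, instead of in each vXConfig section
        maxBatchSize: 1000 # Valid range: 1-10000
        v3Config: # version-specific settings are otherwise unchanged
            database: mydatabase
```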
```diff
@@ -179,9 +179,8 @@ export async function verifyAppConfig(cfg) {
         return false;
     }

-    // Validate and set default for maxBatchSize based on version
-    const versionConfig = `v${influxdbVersion}Config`;
-    const maxBatchSizePath = `Butler-SOS.influxdbConfig.${versionConfig}.maxBatchSize`;
+    // Validate and set default for maxBatchSize
+    const maxBatchSizePath = `Butler-SOS.influxdbConfig.maxBatchSize`;

     if (cfg.has(maxBatchSizePath)) {
         const maxBatchSize = cfg.get(maxBatchSizePath);
```
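The effect of the hunk above is that the setting is read from one version-independent path. A sketch of that behaviour — the helper name and the stand-in config object are hypothetical; the default (1000) and valid range (1-10000) come from the schema in this commit:

```javascript
// Hypothetical helper mirroring the post-move validation in verifyAppConfig
function resolveMaxBatchSize(cfg) {
    const path = 'Butler-SOS.influxdbConfig.maxBatchSize';
    if (!cfg.has(path)) return 1000; // schema default when the setting is absent
    const value = cfg.get(path);
    if (!Number.isInteger(value) || value < 1 || value > 10000) {
        throw new Error(`${path} must be an integer between 1 and 10000, got ${value}`);
    }
    return value;
}

// Minimal stand-in for the `config` package's cfg object
const cfg = {
    data: { 'Butler-SOS.influxdbConfig.maxBatchSize': 500 },
    has(p) {
        return p in this.data;
    },
    get(p) {
        return this.data[p];
    },
};

console.log(resolveMaxBatchSize(cfg)); // 500
```

Because the path no longer depends on `influxdbVersion`, configs that still define `maxBatchSize` inside `v1Config`/`v2Config`/`v3Config` will fail schema validation — hence the breaking-change marker in the commit message.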
```diff
@@ -316,6 +316,14 @@ export const destinationsSchema = {
                 },
                 port: { type: 'number' },
                 version: { type: 'number' },
+                maxBatchSize: {
+                    type: 'number',
+                    description:
+                        'Maximum number of data points to write in a single batch. Progressive retry with smaller sizes attempted on failure.',
+                    default: 1000,
+                    minimum: 1,
+                    maximum: 10000,
+                },
                 v3Config: {
                     type: 'object',
                     properties: {
@@ -335,14 +343,6 @@ export const destinationsSchema = {
                             default: 60000,
                             minimum: 1000,
                         },
-                        maxBatchSize: {
-                            type: 'number',
-                            description:
-                                'Maximum number of data points to write in a single batch. Progressive retry with smaller sizes attempted on failure.',
-                            default: 1000,
-                            minimum: 1,
-                            maximum: 10000,
-                        },
                     },
                     required: ['database', 'description', 'token', 'retentionDuration'],
                     additionalProperties: false,
@@ -355,14 +355,6 @@ export const destinationsSchema = {
                         description: { type: 'string' },
                         token: { type: 'string' },
                         retentionDuration: { type: 'string' },
-                        maxBatchSize: {
-                            type: 'number',
-                            description:
-                                'Maximum number of data points to write in a single batch. Progressive retry with smaller sizes attempted on failure.',
-                            default: 1000,
-                            minimum: 1,
-                            maximum: 10000,
-                        },
                     },
                     required: ['org', 'bucket', 'description', 'token', 'retentionDuration'],
                     additionalProperties: false,
@@ -393,14 +385,6 @@ export const destinationsSchema = {
                         required: ['name', 'duration'],
                         additionalProperties: false,
                     },
-                    maxBatchSize: {
-                        type: 'number',
-                        description:
-                            'Maximum number of data points to write in a single batch. Progressive retry with smaller sizes attempted on failure.',
-                        default: 1000,
-                        minimum: 1,
-                        maximum: 10000,
-                    },
                 },
                 required: ['auth', 'dbName', 'retentionPolicy'],
                 additionalProperties: false,
```
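The progressive retry that the `maxBatchSize` descriptions mention (1000→500→250→100→10→1) can be sketched as below. This is a synchronous simplification; the real Butler SOS code is asynchronous and lives in its InfluxDB utils, and the function and parameter names here are assumptions:

```javascript
// Progressive-retry batching sketch: try the configured batch size first,
// then fall back through smaller sizes until a full write succeeds.
const RETRY_SIZES = [1000, 500, 250, 100, 10, 1];

function writeWithProgressiveRetry(points, writeBatch, maxBatchSize = 1000) {
    // Never try a batch size larger than the configured maximum
    const sizes = RETRY_SIZES.filter((size) => size <= maxBatchSize);
    let lastError;
    for (const size of sizes) {
        try {
            for (let i = 0; i < points.length; i += size) {
                writeBatch(points.slice(i, i + size));
            }
            return size; // the batch size that succeeded
        } catch (err) {
            lastError = err; // fall through to the next, smaller size
        }
    }
    throw lastError; // even single-point writes failed
}

// Demo: a writer that rejects batches larger than 250 points
const written = [];
const writeBatch = (batch) => {
    if (batch.length > 250) throw new Error('batch too large');
    written.push(batch.length);
};

console.log(writeWithProgressiveRetry(new Array(600).fill({}), writeBatch, 1000)); // 250
```

On retry the sketch re-writes earlier batches from the start; writing identical InfluxDB points twice is effectively idempotent (same measurement, tags, and timestamp overwrite), which is why such a simple strategy is viable.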
@@ -31,6 +31,7 @@ jest.unstable_mockModule('../../../globals.js', () => ({
|
|||||||
const mockUtils = {
|
const mockUtils = {
|
||||||
isInfluxDbEnabled: jest.fn(),
|
isInfluxDbEnabled: jest.fn(),
|
||||||
writeToInfluxWithRetry: jest.fn(),
|
writeToInfluxWithRetry: jest.fn(),
|
||||||
|
writeBatchToInfluxV1: jest.fn(),
|
||||||
};
|
};
|
||||||
|
|
||||||
jest.unstable_mockModule('../shared/utils.js', () => mockUtils);
|
jest.unstable_mockModule('../shared/utils.js', () => mockUtils);
|
||||||
@@ -50,7 +51,11 @@ describe('v1/butler-memory', () => {
|
|||||||
|
|
||||||
// Setup default mocks
|
// Setup default mocks
|
||||||
utils.isInfluxDbEnabled.mockReturnValue(true);
|
utils.isInfluxDbEnabled.mockReturnValue(true);
|
||||||
utils.writeToInfluxWithRetry.mockResolvedValue();
|
utils.writeBatchToInfluxV1.mockResolvedValue();
|
||||||
|
globals.config.get.mockImplementation((key) => {
|
||||||
|
if (key === 'Butler-SOS.influxdbConfig.maxBatchSize') return 100;
|
||||||
|
return undefined;
|
||||||
|
});
|
||||||
});
|
});
|
||||||
|
|
||||||
describe('storeButlerMemoryV1', () => {
|
describe('storeButlerMemoryV1', () => {
|
||||||
@@ -67,7 +72,7 @@ describe('v1/butler-memory', () => {
|
|||||||
|
|
||||||
await storeButlerMemoryV1(memory);
|
await storeButlerMemoryV1(memory);
|
||||||
|
|
||||||
expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
|
expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
|
||||||
expect(globals.logger.debug).toHaveBeenCalledWith(
|
expect(globals.logger.debug).toHaveBeenCalledWith(
|
||||||
expect.stringContaining('MEMORY USAGE V1')
|
expect.stringContaining('MEMORY USAGE V1')
|
||||||
);
|
);
|
||||||
@@ -84,11 +89,11 @@ describe('v1/butler-memory', () => {
|
|||||||
|
|
||||||
await storeButlerMemoryV1(memory);
|
await storeButlerMemoryV1(memory);
|
||||||
|
|
||||||
expect(utils.writeToInfluxWithRetry).toHaveBeenCalledWith(
|
expect(utils.writeBatchToInfluxV1).toHaveBeenCalledWith(
|
||||||
expect.any(Function),
|
expect.any(Array),
|
||||||
'Memory usage metrics',
|
'Memory usage metrics',
|
||||||
'v1',
|
'INFLUXDB_V1_WRITE',
|
||||||
''
|
100
|
||||||
);
|
);
|
||||||
expect(globals.logger.verbose).toHaveBeenCalledWith(
|
expect(globals.logger.verbose).toHaveBeenCalledWith(
|
||||||
'MEMORY USAGE V1: Sent Butler SOS memory usage data to InfluxDB'
|
'MEMORY USAGE V1: Sent Butler SOS memory usage data to InfluxDB'
|
||||||
@@ -104,27 +109,49 @@ describe('v1/butler-memory', () => {
|
|||||||
processMemoryMByte: 350.5,
|
processMemoryMByte: 350.5,
|
||||||
};
|
};
|
||||||
|
|
-                utils.writeToInfluxWithRetry.mockImplementation(async (writeFn) => {
-                    await writeFn();
+                utils.writeBatchToInfluxV1.mockImplementation(async () => {
+                    // writeToInfluxWithRetry took a function that generated and wrote
+                    // the points; writeBatchToInfluxV1(batch, logMessage, instanceTag,
+                    // batchSize) receives the points directly, so the assertions below
+                    // inspect the batch argument instead of invoking a write callback.
                 });

                 await storeButlerMemoryV1(memory);

-                expect(globals.influx.writePoints).toHaveBeenCalledWith([
-                    {
-                        measurement: 'butlersos_memory_usage',
-                        tags: {
-                            butler_sos_instance: 'test-instance',
-                            version: '1.0.0',
-                        },
-                        fields: {
-                            heap_used: 150.5,
-                            heap_total: 300.75,
-                            external: 75.25,
-                            process_memory: 350.5,
-                        },
-                    },
-                ]);
+                expect(utils.writeBatchToInfluxV1).toHaveBeenCalledWith(
+                    expect.arrayContaining([
+                        expect.objectContaining({
+                            measurement: 'butlersos_memory_usage',
+                            tags: {
+                                butler_sos_instance: 'test-instance',
+                                version: '1.0.0',
+                            },
+                            fields: {
+                                heap_used: 150.5,
+                                heap_total: 300.75,
+                                external: 75.25,
+                                process_memory: 350.5,
+                            },
+                        }),
+                    ]),
+                    expect.any(String),
+                    expect.any(String),
+                    expect.any(Number)
+                );
             });

         test('should handle write errors and rethrow', async () => {
@@ -137,7 +164,7 @@ describe('v1/butler-memory', () => {
             };

             const writeError = new Error('Write failed');
-            utils.writeToInfluxWithRetry.mockRejectedValue(writeError);
+            utils.writeBatchToInfluxV1.mockRejectedValue(writeError);

             await expect(storeButlerMemoryV1(memory)).rejects.toThrow('Write failed');

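The change above can be summarized as a calling-convention switch. The sketch below mirrors the two mocked helpers to show why the tests now assert on a batch argument; the bodies and the chunking helper are assumptions inferred from the mocks, not the actual Butler SOS implementation.

```javascript
// Old style: the helper received a callback that generated and wrote the
// points, so tests had to invoke the callback to exercise the write path.
async function writeToInfluxWithRetry(writeFn, logMessage, instanceTag) {
    await writeFn();
}

// Assumed helper: split a batch of points into writes of at most maxBatchSize.
function chunkBatch(batch, maxBatchSize) {
    const chunks = [];
    for (let i = 0; i < batch.length; i += maxBatchSize) {
        chunks.push(batch.slice(i, i + maxBatchSize));
    }
    return chunks;
}

// New style: the helper receives the points directly, so tests can assert on
// the batch argument instead of mocking an implementation.
async function writeBatchToInfluxV1(batch, logMessage, instanceTag, maxBatchSize) {
    for (const chunk of chunkBatch(batch, maxBatchSize)) {
        // a real implementation would write each chunk to InfluxDB here
    }
}
```

With this shape, a test can pass `expect.any(Array)` for the batch and the stubbed `maxBatchSize` (100 in these tests) as the final argument.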
@@ -24,6 +24,7 @@ jest.unstable_mockModule('../../../globals.js', () => ({ default: mockGlobals })
 const mockUtils = {
     isInfluxDbEnabled: jest.fn(),
     writeToInfluxWithRetry: jest.fn(),
+    writeBatchToInfluxV1: jest.fn(),
 };

 jest.unstable_mockModule('../shared/utils.js', () => mockUtils);
@@ -43,6 +44,7 @@ describe('v1/event-counts', () => {
         globals.config.get.mockImplementation((path) => {
             if (path.includes('measurementName')) return 'event_counts';
             if (path.includes('tags')) return [{ name: 'env', value: 'prod' }];
+            if (path.includes('maxBatchSize')) return 100;
             return undefined;
         });

@@ -58,6 +60,7 @@ describe('v1/event-counts', () => {

         utils.isInfluxDbEnabled.mockReturnValue(true);
         utils.writeToInfluxWithRetry.mockResolvedValue();
+        utils.writeBatchToInfluxV1.mockResolvedValue();
     });

     test('should return early when no events', async () => {
@@ -70,16 +73,16 @@ describe('v1/event-counts', () => {
     test('should return early when InfluxDB disabled', async () => {
         utils.isInfluxDbEnabled.mockReturnValue(false);
         await storeEventCountV1();
-        expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+        expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
     });

     test('should write event counts', async () => {
         await storeEventCountV1();
-        expect(utils.writeToInfluxWithRetry).toHaveBeenCalledWith(
-            expect.any(Function),
+        expect(utils.writeBatchToInfluxV1).toHaveBeenCalledWith(
+            expect.any(Array),
             'Event counts',
-            'v1',
-            ''
+            '',
+            100
         );
     });

@@ -90,7 +93,7 @@ describe('v1/event-counts', () => {
         ]);
         globals.udpEvents.getUserEvents.mockResolvedValue([]);
         await storeEventCountV1();
-        expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+        expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
     });

     test('should apply config tags to user events', async () => {
@@ -100,7 +103,7 @@ describe('v1/event-counts', () => {
             { source: 'qseow-proxy', host: 'host2', subsystem: 'Session', counter: 7 },
         ]);
         await storeEventCountV1();
-        expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+        expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
     });

     test('should handle mixed log and user events', async () => {
@@ -111,29 +114,29 @@ describe('v1/event-counts', () => {
             { source: 'qseow-proxy', host: 'host2', subsystem: 'User', counter: 3 },
         ]);
         await storeEventCountV1();
-        expect(utils.writeToInfluxWithRetry).toHaveBeenCalledWith(
-            expect.any(Function),
+        expect(utils.writeBatchToInfluxV1).toHaveBeenCalledWith(
+            expect.any(Array),
             'Event counts',
-            'v1',
-            ''
+            '',
+            100
         );
     });

     test('should handle write errors', async () => {
-        utils.writeToInfluxWithRetry.mockRejectedValue(new Error('Write failed'));
+        utils.writeBatchToInfluxV1.mockRejectedValue(new Error('Write failed'));
         await expect(storeEventCountV1()).rejects.toThrow();
         expect(globals.logger.error).toHaveBeenCalled();
     });

     test('should write rejected event counts', async () => {
         await storeRejectedEventCountV1();
-        expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+        expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
     });

     test('should return early when no rejected events', async () => {
         globals.rejectedEvents.getRejectedLogEvents.mockResolvedValue([]);
         await storeRejectedEventCountV1();
-        expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+        expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
     });

     test('should return early when InfluxDB disabled for rejected events', async () => {
@@ -142,7 +145,7 @@ describe('v1/event-counts', () => {
         ]);
         utils.isInfluxDbEnabled.mockReturnValue(false);
         await storeRejectedEventCountV1();
-        expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+        expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
     });

     test('should handle rejected qix-perf events with appName', async () => {
@@ -163,7 +166,7 @@ describe('v1/event-counts', () => {
             return null;
         });
         await storeRejectedEventCountV1();
-        expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+        expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
     });

     test('should handle rejected qix-perf events without appName', async () => {
@@ -180,7 +183,7 @@ describe('v1/event-counts', () => {
         ]);
         globals.config.has.mockReturnValue(false);
         await storeRejectedEventCountV1();
-        expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+        expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
     });

     test('should handle rejected non-qix-perf events', async () => {
@@ -193,14 +196,14 @@ describe('v1/event-counts', () => {
             },
         ]);
         await storeRejectedEventCountV1();
-        expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+        expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
     });

     test('should handle rejected events write errors', async () => {
         globals.rejectedEvents.getRejectedLogEvents.mockResolvedValue([
             { source: 'test', counter: 1 },
         ]);
-        utils.writeToInfluxWithRetry.mockRejectedValue(new Error('Write failed'));
+        utils.writeBatchToInfluxV1.mockRejectedValue(new Error('Write failed'));
         await expect(storeRejectedEventCountV1()).rejects.toThrow();
         expect(globals.logger.error).toHaveBeenCalled();
     });
@@ -23,6 +23,7 @@ jest.unstable_mockModule('../../../globals.js', () => ({ default: mockGlobals })
 const mockUtils = {
     isInfluxDbEnabled: jest.fn(),
     writeToInfluxWithRetry: jest.fn(),
+    writeBatchToInfluxV1: jest.fn(),
     processAppDocuments: jest.fn(),
     getFormattedTime: jest.fn(() => '2024-01-01T00:00:00Z'),
 };
@@ -43,11 +44,13 @@ describe('v1/health-metrics', () => {
         globals.config.get.mockImplementation((path) => {
             if (path.includes('measurementName')) return 'health_metrics';
             if (path.includes('tags')) return [{ name: 'env', value: 'prod' }];
+            if (path.includes('maxBatchSize')) return 100;
             return undefined;
         });

         utils.isInfluxDbEnabled.mockReturnValue(true);
         utils.writeToInfluxWithRetry.mockResolvedValue();
+        utils.writeBatchToInfluxV1.mockResolvedValue();
         utils.processAppDocuments.mockResolvedValue({ appNames: [], sessionAppNames: [] });
     });

@@ -55,7 +58,7 @@ describe('v1/health-metrics', () => {
         utils.isInfluxDbEnabled.mockReturnValue(false);
         const body = { mem: {}, apps: {}, cpu: {}, session: {}, users: {}, cache: {} };
         await storeHealthMetricsV1({ server: 'server1' }, body);
-        expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+        expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
     });

     test('should write complete health metrics', async () => {
@@ -73,17 +76,24 @@ describe('v1/health-metrics', () => {
             users: { active: 3, total: 8 },
             cache: { hits: 100, lookups: 120, added: 20, replaced: 5, bytes_added: 1024 },
             saturated: false,
+            version: '1.0.0',
+            started: '2024-01-01T00:00:00Z',
         };
         const serverTags = { server_name: 'server1', server_description: 'Test server' };

         await storeHealthMetricsV1(serverTags, body);

-        expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+        expect(utils.writeBatchToInfluxV1).toHaveBeenCalledWith(
+            expect.any(Array),
+            'Health metrics for server1',
+            'server1',
+            100
+        );
         expect(utils.processAppDocuments).toHaveBeenCalledTimes(3);
     });

     test('should handle write errors', async () => {
-        utils.writeToInfluxWithRetry.mockRejectedValue(new Error('Write failed'));
+        utils.writeBatchToInfluxV1.mockRejectedValue(new Error('Write failed'));
         const body = {
             mem: {},
             apps: { active_docs: [], loaded_docs: [], in_memory_docs: [] },
@@ -92,7 +102,7 @@ describe('v1/health-metrics', () => {
             users: {},
             cache: {},
         };
-        await expect(storeHealthMetricsV1({}, body)).rejects.toThrow();
+        await expect(storeHealthMetricsV1({ server_name: 'server1' }, body)).rejects.toThrow();
         expect(globals.logger.error).toHaveBeenCalled();
     });

@@ -108,8 +118,10 @@ describe('v1/health-metrics', () => {
             session: {},
             users: {},
             cache: {},
+            version: '1.0.0',
+            started: '2024-01-01T00:00:00Z',
         };
-        await storeHealthMetricsV1({}, body);
+        await storeHealthMetricsV1({ server_name: 'server1' }, body);
         expect(utils.processAppDocuments).toHaveBeenCalledTimes(3);
     });

@@ -119,6 +131,7 @@ describe('v1/health-metrics', () => {
             if (path.includes('tags')) return [{ name: 'env', value: 'prod' }];
             if (path.includes('includeFields.activeDocs')) return true;
             if (path.includes('enableAppNameExtract')) return true;
+            if (path.includes('maxBatchSize')) return 100;
             return undefined;
         });
         utils.processAppDocuments.mockResolvedValue({
@@ -132,9 +145,11 @@ describe('v1/health-metrics', () => {
             session: { active: 5 },
             users: { active: 3 },
             cache: { hits: 100 },
+            version: '1.0.0',
+            started: '2024-01-01T00:00:00Z',
         };
-        await storeHealthMetricsV1({}, body);
-        expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+        await storeHealthMetricsV1({ server_name: 'server1' }, body);
+        expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
     });

     test('should handle config with loadedDocs enabled', async () => {
@@ -143,6 +158,7 @@ describe('v1/health-metrics', () => {
             if (path.includes('tags')) return [{ name: 'env', value: 'prod' }];
             if (path.includes('includeFields.loadedDocs')) return true;
             if (path.includes('enableAppNameExtract')) return true;
+            if (path.includes('maxBatchSize')) return 100;
             return undefined;
         });
         utils.processAppDocuments.mockResolvedValue({
@@ -156,9 +172,11 @@ describe('v1/health-metrics', () => {
             session: { active: 5 },
             users: { active: 3 },
             cache: { hits: 100 },
+            version: '1.0.0',
+            started: '2024-01-01T00:00:00Z',
         };
-        await storeHealthMetricsV1({}, body);
-        expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+        await storeHealthMetricsV1({ server_name: 'server1' }, body);
+        expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
     });

     test('should handle config with inMemoryDocs enabled', async () => {
@@ -167,6 +185,7 @@ describe('v1/health-metrics', () => {
             if (path.includes('tags')) return [{ name: 'env', value: 'prod' }];
             if (path.includes('includeFields.inMemoryDocs')) return true;
             if (path.includes('enableAppNameExtract')) return true;
+            if (path.includes('maxBatchSize')) return 100;
             return undefined;
         });
         utils.processAppDocuments.mockResolvedValue({
@@ -180,9 +199,11 @@ describe('v1/health-metrics', () => {
             session: { active: 5 },
             users: { active: 3 },
             cache: { hits: 100 },
+            version: '1.0.0',
+            started: '2024-01-01T00:00:00Z',
         };
-        await storeHealthMetricsV1({}, body);
-        expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+        await storeHealthMetricsV1({ server_name: 'server1' }, body);
+        expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
     });

     test('should handle config with all doc types disabled', async () => {
@@ -191,6 +212,7 @@ describe('v1/health-metrics', () => {
             if (path.includes('tags')) return [];
             if (path.includes('includeFields')) return false;
             if (path.includes('enableAppNameExtract')) return false;
+            if (path.includes('maxBatchSize')) return 100;
             return undefined;
         });
         const body = {
@@ -200,8 +222,10 @@ describe('v1/health-metrics', () => {
             session: { active: 5 },
             users: { active: 3 },
             cache: { hits: 100 },
+            version: '1.0.0',
+            started: '2024-01-01T00:00:00Z',
         };
-        await storeHealthMetricsV1({}, body);
-        expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+        await storeHealthMetricsV1({ server_name: 'server1' }, body);
+        expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
     });
 });
@@ -22,6 +22,7 @@ jest.unstable_mockModule('../../../globals.js', () => ({ default: mockGlobals })
 const mockUtils = {
     isInfluxDbEnabled: jest.fn(),
     writeToInfluxWithRetry: jest.fn(),
+    writeBatchToInfluxV1: jest.fn(),
 };

 jest.unstable_mockModule('../shared/utils.js', () => mockUtils);
@@ -36,21 +37,25 @@ describe('v1/log-events', () => {
         const logEvents = await import('../v1/log-events.js');
         storeLogEventV1 = logEvents.storeLogEventV1;
         globals.config.has.mockReturnValue(true);
-        globals.config.get.mockReturnValue([{ name: 'env', value: 'prod' }]);
+        globals.config.get.mockImplementation((path) => {
+            if (path.includes('maxBatchSize')) return 100;
+            return [{ name: 'env', value: 'prod' }];
+        });
         utils.isInfluxDbEnabled.mockReturnValue(true);
         utils.writeToInfluxWithRetry.mockResolvedValue();
+        utils.writeBatchToInfluxV1.mockResolvedValue();
     });

     test('should return early when InfluxDB disabled', async () => {
         utils.isInfluxDbEnabled.mockReturnValue(false);
         await storeLogEventV1({ source: 'qseow-engine', host: 'server1' });
-        expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+        expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
     });

     test('should warn for unsupported source', async () => {
         await storeLogEventV1({ source: 'unknown', host: 'server1' });
         expect(globals.logger.warn).toHaveBeenCalledWith(expect.stringContaining('Unsupported'));
-        expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+        expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
     });

     test('should write qseow-engine event', async () => {
@@ -62,11 +67,11 @@ describe('v1/log-events', () => {
             subsystem: 'System',
             message: 'test',
         });
-        expect(utils.writeToInfluxWithRetry).toHaveBeenCalledWith(
-            expect.any(Function),
+        expect(utils.writeBatchToInfluxV1).toHaveBeenCalledWith(
+            expect.any(Array),
             'Log event from qseow-engine',
-            'v1',
-            'server1'
+            'server1',
+            100
         );
     });

@@ -79,7 +84,7 @@ describe('v1/log-events', () => {
             subsystem: 'Proxy',
             message: 'test',
         });
-        expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+        expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
     });

     test('should write qseow-scheduler event', async () => {
@@ -91,7 +96,7 @@ describe('v1/log-events', () => {
             subsystem: 'Scheduler',
             message: 'test',
         });
-        expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+        expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
     });

     test('should write qseow-repository event', async () => {
@@ -103,7 +108,7 @@ describe('v1/log-events', () => {
             subsystem: 'Repository',
             message: 'test',
         });
-        expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+        expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
     });

     test('should write qseow-qix-perf event', async () => {
@@ -115,11 +120,11 @@ describe('v1/log-events', () => {
             subsystem: 'Perf',
             message: 'test',
         });
-        expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+        expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
     });

     test('should handle write errors', async () => {
-        utils.writeToInfluxWithRetry.mockRejectedValue(new Error('Write failed'));
+        utils.writeBatchToInfluxV1.mockRejectedValue(new Error('Write failed'));
         await expect(
             storeLogEventV1({
                 source: 'qseow-engine',
@@ -146,7 +151,7 @@ describe('v1/log-events', () => {
                 { name: 'component', value: 'engine' },
             ],
         });
-        expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+        expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
     });

     test('should apply config tags when available', async () => {
@@ -163,7 +168,7 @@ describe('v1/log-events', () => {
             subsystem: 'Proxy',
             message: 'test',
         });
-        expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+        expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
     });

     test('should handle events without categories', async () => {
@@ -176,7 +181,7 @@ describe('v1/log-events', () => {
             message: 'test',
             category: [],
         });
-        expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+        expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
     });

     test('should handle engine event with all optional fields', async () => {
@@ -203,7 +208,7 @@ describe('v1/log-events', () => {
             context: 'DocSession',
             session_id: 'sess-001',
         });
-        expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+        expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
     });

     test('should handle engine event without optional fields', async () => {
@@ -219,7 +224,7 @@ describe('v1/log-events', () => {
             user_id: '',
             result_code: '',
         });
-        expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+        expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
     });

     test('should handle proxy event with optional fields', async () => {
@@ -238,7 +243,7 @@ describe('v1/log-events', () => {
             origin: 'Proxy',
             context: 'AuthSession',
         });
-        expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+        expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
     });

     test('should handle scheduler event with task fields', async () => {
@@ -258,7 +263,7 @@ describe('v1/log-events', () => {
             app_id: 'finance-001',
             execution_id: 'exec-999',
         });
-        expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+        expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
     });

     test('should handle repository event with optional fields', async () => {
@@ -277,7 +282,7 @@ describe('v1/log-events', () => {
             origin: 'Repository',
             context: 'API',
         });
-        expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+        expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
     });

     test('should handle qix-perf event with all fields', async () => {
@@ -301,7 +306,7 @@ describe('v1/log-events', () => {
             object_id: 'obj-123',
             process_time: 150,
         });
-        expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+        expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
     });

     test('should handle qix-perf event with missing optional fields', async () => {
@@ -324,6 +329,6 @@ describe('v1/log-events', () => {
             app_name: '',
             object_id: '',
         });
-        expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+        expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
     });
 });
@@ -43,6 +43,7 @@ jest.unstable_mockModule('../../../globals.js', () => ({ default: mockGlobals })
 const mockUtils = {
     isInfluxDbEnabled: jest.fn(),
     writeToInfluxWithRetry: jest.fn(),
+    writeBatchToInfluxV1: jest.fn(),
 };

 jest.unstable_mockModule('../shared/utils.js', () => mockUtils);
@@ -110,16 +111,17 @@ describe('v1/queue-metrics', () => {
             if (path.includes('measurementName')) return 'queue_metrics';
             if (path.includes('queueMetrics.influxdb.tags'))
                 return [{ name: 'env', value: 'prod' }];
+            if (path === 'Butler-SOS.influxdbConfig.maxBatchSize') return 100;
             return undefined;
         });
         utils.isInfluxDbEnabled.mockReturnValue(true);
-        utils.writeToInfluxWithRetry.mockResolvedValue();
+        utils.writeBatchToInfluxV1.mockResolvedValue();
     });

     test('should return early when InfluxDB disabled for user events', async () => {
         utils.isInfluxDbEnabled.mockReturnValue(false);
         await storeUserEventQueueMetricsV1();
-        expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+        expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
     });

     test('should return early when config disabled', async () => {
@@ -128,7 +130,7 @@ describe('v1/queue-metrics', () => {
             return undefined;
         });
         await storeUserEventQueueMetricsV1();
-        expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+        expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
     });

     test('should return early when queue manager not initialized', async () => {
@@ -137,21 +139,22 @@ describe('v1/queue-metrics', () => {
         expect(globals.logger.warn).toHaveBeenCalledWith(
             expect.stringContaining('not initialized')
         );
+        expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
     });

     test('should write user event queue metrics', async () => {
         await storeUserEventQueueMetricsV1();
-        expect(utils.writeToInfluxWithRetry).toHaveBeenCalledWith(
-            expect.any(Function),
+        expect(utils.writeBatchToInfluxV1).toHaveBeenCalledWith(
+            expect.any(Array),
             expect.stringContaining('User event queue metrics'),
-            'v1',
-            ''
+            '',
+            100
         );
         expect(globals.udpQueueManagerUserActivity.clearMetrics).toHaveBeenCalled();
|
expect(globals.udpQueueManagerUserActivity.clearMetrics).toHaveBeenCalled();
|
||||||
});
|
});
|
||||||
|
|
||||||
test('should handle user event write errors', async () => {
|
test('should handle user event write errors', async () => {
|
||||||
utils.writeToInfluxWithRetry.mockRejectedValue(new Error('Write failed'));
|
utils.writeBatchToInfluxV1.mockRejectedValue(new Error('Write failed'));
|
||||||
await expect(storeUserEventQueueMetricsV1()).rejects.toThrow();
|
await expect(storeUserEventQueueMetricsV1()).rejects.toThrow();
|
||||||
expect(globals.logger.error).toHaveBeenCalled();
|
expect(globals.logger.error).toHaveBeenCalled();
|
||||||
});
|
});
|
||||||
@@ -159,7 +162,7 @@ describe('v1/queue-metrics', () => {
|
|||||||
test('should return early when InfluxDB disabled for log events', async () => {
|
test('should return early when InfluxDB disabled for log events', async () => {
|
||||||
utils.isInfluxDbEnabled.mockReturnValue(false);
|
utils.isInfluxDbEnabled.mockReturnValue(false);
|
||||||
await storeLogEventQueueMetricsV1();
|
await storeLogEventQueueMetricsV1();
|
||||||
expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
|
expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
|
||||||
});
|
});
|
||||||
|
|
||||||
test('should return early when config disabled for log events', async () => {
|
test('should return early when config disabled for log events', async () => {
|
||||||
@@ -168,7 +171,7 @@ describe('v1/queue-metrics', () => {
|
|||||||
return undefined;
|
return undefined;
|
||||||
});
|
});
|
||||||
await storeLogEventQueueMetricsV1();
|
await storeLogEventQueueMetricsV1();
|
||||||
expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
|
expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
|
||||||
});
|
});
|
||||||
|
|
||||||
test('should return early when log queue manager not initialized', async () => {
|
test('should return early when log queue manager not initialized', async () => {
|
||||||
@@ -177,21 +180,22 @@ describe('v1/queue-metrics', () => {
|
|||||||
expect(globals.logger.warn).toHaveBeenCalledWith(
|
expect(globals.logger.warn).toHaveBeenCalledWith(
|
||||||
expect.stringContaining('not initialized')
|
expect.stringContaining('not initialized')
|
||||||
);
|
);
|
||||||
|
expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
|
||||||
});
|
});
|
||||||
|
|
||||||
test('should write log event queue metrics', async () => {
|
test('should write log event queue metrics', async () => {
|
||||||
await storeLogEventQueueMetricsV1();
|
await storeLogEventQueueMetricsV1();
|
||||||
expect(utils.writeToInfluxWithRetry).toHaveBeenCalledWith(
|
expect(utils.writeBatchToInfluxV1).toHaveBeenCalledWith(
|
||||||
expect.any(Function),
|
expect.any(Array),
|
||||||
expect.stringContaining('Log event queue metrics'),
|
expect.stringContaining('Log event queue metrics'),
|
||||||
'v1',
|
'',
|
||||||
''
|
100
|
||||||
);
|
);
|
||||||
expect(globals.udpQueueManagerLogEvents.clearMetrics).toHaveBeenCalled();
|
expect(globals.udpQueueManagerLogEvents.clearMetrics).toHaveBeenCalled();
|
||||||
});
|
});
|
||||||
|
|
||||||
test('should handle log event write errors', async () => {
|
test('should handle log event write errors', async () => {
|
||||||
utils.writeToInfluxWithRetry.mockRejectedValue(new Error('Write failed'));
|
utils.writeBatchToInfluxV1.mockRejectedValue(new Error('Write failed'));
|
||||||
await expect(storeLogEventQueueMetricsV1()).rejects.toThrow();
|
await expect(storeLogEventQueueMetricsV1()).rejects.toThrow();
|
||||||
expect(globals.logger.error).toHaveBeenCalled();
|
expect(globals.logger.error).toHaveBeenCalled();
|
||||||
});
|
});
|
||||||
|
|||||||
@@ -31,6 +31,7 @@ jest.unstable_mockModule('../../../globals.js', () => ({
|
|||||||
const mockUtils = {
|
const mockUtils = {
|
||||||
isInfluxDbEnabled: jest.fn(),
|
isInfluxDbEnabled: jest.fn(),
|
||||||
writeToInfluxWithRetry: jest.fn(),
|
writeToInfluxWithRetry: jest.fn(),
|
||||||
|
writeBatchToInfluxV1: jest.fn(),
|
||||||
};
|
};
|
||||||
|
|
||||||
jest.unstable_mockModule('../shared/utils.js', () => mockUtils);
|
jest.unstable_mockModule('../shared/utils.js', () => mockUtils);
|
||||||
@@ -51,6 +52,11 @@ describe('v1/sessions', () => {
|
|||||||
// Setup default mocks
|
// Setup default mocks
|
||||||
utils.isInfluxDbEnabled.mockReturnValue(true);
|
utils.isInfluxDbEnabled.mockReturnValue(true);
|
||||||
utils.writeToInfluxWithRetry.mockResolvedValue();
|
utils.writeToInfluxWithRetry.mockResolvedValue();
|
||||||
|
utils.writeBatchToInfluxV1.mockResolvedValue();
|
||||||
|
globals.config.get.mockImplementation((path) => {
|
||||||
|
if (path.includes('maxBatchSize')) return 100;
|
||||||
|
return undefined;
|
||||||
|
});
|
||||||
});
|
});
|
||||||
|
|
||||||
describe('storeSessionsV1', () => {
|
describe('storeSessionsV1', () => {
|
||||||
@@ -68,7 +74,7 @@ describe('v1/sessions', () => {
|
|||||||
|
|
||||||
await storeSessionsV1(userSessions);
|
await storeSessionsV1(userSessions);
|
||||||
|
|
||||||
expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
|
expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
|
||||||
});
|
});
|
||||||
|
|
||||||
test('should return early when no datapoints', async () => {
|
test('should return early when no datapoints', async () => {
|
||||||
@@ -86,7 +92,7 @@ describe('v1/sessions', () => {
|
|||||||
expect(globals.logger.warn).toHaveBeenCalledWith(
|
expect(globals.logger.warn).toHaveBeenCalledWith(
|
||||||
'PROXY SESSIONS V1: No datapoints to write to InfluxDB'
|
'PROXY SESSIONS V1: No datapoints to write to InfluxDB'
|
||||||
);
|
);
|
||||||
expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
|
expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
|
||||||
});
|
});
|
||||||
|
|
||||||
test('should successfully write session data', async () => {
|
test('should successfully write session data', async () => {
|
||||||
@@ -112,11 +118,11 @@ describe('v1/sessions', () => {
|
|||||||
|
|
||||||
await storeSessionsV1(userSessions);
|
await storeSessionsV1(userSessions);
|
||||||
|
|
||||||
expect(utils.writeToInfluxWithRetry).toHaveBeenCalledWith(
|
expect(utils.writeBatchToInfluxV1).toHaveBeenCalledWith(
|
||||||
expect.any(Function),
|
expect.any(Array),
|
||||||
'Proxy sessions for server1/vp1',
|
'Proxy sessions for server1/vp1',
|
||||||
'v1',
|
'central',
|
||||||
'central'
|
100
|
||||||
);
|
);
|
||||||
expect(globals.logger.verbose).toHaveBeenCalledWith(
|
expect(globals.logger.verbose).toHaveBeenCalledWith(
|
||||||
expect.stringContaining('Sent user session data to InfluxDB')
|
expect.stringContaining('Sent user session data to InfluxDB')
|
||||||
@@ -146,13 +152,14 @@ describe('v1/sessions', () => {
|
|||||||
datapointInfluxdb: datapoints,
|
datapointInfluxdb: datapoints,
|
||||||
};
|
};
|
||||||
|
|
||||||
utils.writeToInfluxWithRetry.mockImplementation(async (writeFn) => {
|
|
||||||
await writeFn();
|
|
||||||
});
|
|
||||||
|
|
||||||
await storeSessionsV1(userSessions);
|
await storeSessionsV1(userSessions);
|
||||||
|
|
||||||
expect(globals.influx.writePoints).toHaveBeenCalledWith(datapoints);
|
expect(utils.writeBatchToInfluxV1).toHaveBeenCalledWith(
|
||||||
|
datapoints,
|
||||||
|
expect.any(String),
|
||||||
|
'central',
|
||||||
|
100
|
||||||
|
);
|
||||||
});
|
});
|
||||||
|
|
||||||
test('should handle write errors', async () => {
|
test('should handle write errors', async () => {
|
||||||
@@ -166,7 +173,7 @@ describe('v1/sessions', () => {
|
|||||||
};
|
};
|
||||||
|
|
||||||
const writeError = new Error('Write failed');
|
const writeError = new Error('Write failed');
|
||||||
utils.writeToInfluxWithRetry.mockRejectedValue(writeError);
|
utils.writeBatchToInfluxV1.mockRejectedValue(writeError);
|
||||||
|
|
||||||
await expect(storeSessionsV1(userSessions)).rejects.toThrow('Write failed');
|
await expect(storeSessionsV1(userSessions)).rejects.toThrow('Write failed');
|
||||||
|
|
||||||
|
|||||||
@@ -32,6 +32,7 @@ const mockUtils = {
|
|||||||
isInfluxDbEnabled: jest.fn(),
|
isInfluxDbEnabled: jest.fn(),
|
||||||
getConfigTags: jest.fn(),
|
getConfigTags: jest.fn(),
|
||||||
writeToInfluxWithRetry: jest.fn(),
|
writeToInfluxWithRetry: jest.fn(),
|
||||||
|
writeBatchToInfluxV1: jest.fn(),
|
||||||
};
|
};
|
||||||
|
|
||||||
jest.unstable_mockModule('../shared/utils.js', () => mockUtils);
|
jest.unstable_mockModule('../shared/utils.js', () => mockUtils);
|
||||||
@@ -51,9 +52,13 @@ describe('v1/user-events', () => {
|
|||||||
|
|
||||||
// Setup default mocks
|
// Setup default mocks
|
||||||
globals.config.has.mockReturnValue(true);
|
globals.config.has.mockReturnValue(true);
|
||||||
globals.config.get.mockReturnValue([{ name: 'env', value: 'prod' }]);
|
globals.config.get.mockImplementation((key) => {
|
||||||
|
if (key === 'Butler-SOS.userEvents.tags') return [{ name: 'env', value: 'prod' }];
|
||||||
|
if (key === 'Butler-SOS.influxdbConfig.maxBatchSize') return 100;
|
||||||
|
return null;
|
||||||
|
});
|
||||||
utils.isInfluxDbEnabled.mockReturnValue(true);
|
utils.isInfluxDbEnabled.mockReturnValue(true);
|
||||||
utils.writeToInfluxWithRetry.mockResolvedValue();
|
utils.writeBatchToInfluxV1.mockResolvedValue();
|
||||||
});
|
});
|
||||||
|
|
||||||
describe('storeUserEventV1', () => {
|
describe('storeUserEventV1', () => {
|
||||||
@@ -70,7 +75,7 @@ describe('v1/user-events', () => {
|
|||||||
|
|
||||||
await storeUserEventV1(msg);
|
await storeUserEventV1(msg);
|
||||||
|
|
||||||
expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
|
expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
|
||||||
});
|
});
|
||||||
|
|
||||||
test('should successfully write user event', async () => {
|
test('should successfully write user event', async () => {
|
||||||
@@ -84,11 +89,11 @@ describe('v1/user-events', () => {
|
|||||||
|
|
||||||
await storeUserEventV1(msg);
|
await storeUserEventV1(msg);
|
||||||
|
|
||||||
expect(utils.writeToInfluxWithRetry).toHaveBeenCalledWith(
|
expect(utils.writeBatchToInfluxV1).toHaveBeenCalledWith(
|
||||||
expect.any(Function),
|
expect.any(Array),
|
||||||
'User event',
|
'User event',
|
||||||
'v1',
|
'server1',
|
||||||
'server1'
|
100
|
||||||
);
|
);
|
||||||
expect(globals.logger.verbose).toHaveBeenCalledWith(
|
expect(globals.logger.verbose).toHaveBeenCalledWith(
|
||||||
'USER EVENT V1: Sent user event data to InfluxDB'
|
'USER EVENT V1: Sent user event data to InfluxDB'
|
||||||
@@ -108,7 +113,7 @@ describe('v1/user-events', () => {
|
|||||||
expect(globals.logger.warn).toHaveBeenCalledWith(
|
expect(globals.logger.warn).toHaveBeenCalledWith(
|
||||||
expect.stringContaining('Missing required field')
|
expect.stringContaining('Missing required field')
|
||||||
);
|
);
|
||||||
expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
|
expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
|
||||||
});
|
});
|
||||||
|
|
||||||
test('should validate required fields - missing command', async () => {
|
test('should validate required fields - missing command', async () => {
|
||||||
@@ -124,7 +129,7 @@ describe('v1/user-events', () => {
|
|||||||
expect(globals.logger.warn).toHaveBeenCalledWith(
|
expect(globals.logger.warn).toHaveBeenCalledWith(
|
||||||
expect.stringContaining('Missing required field')
|
expect.stringContaining('Missing required field')
|
||||||
);
|
);
|
||||||
expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
|
expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
|
||||||
});
|
});
|
||||||
|
|
||||||
test('should validate required fields - missing user_directory', async () => {
|
test('should validate required fields - missing user_directory', async () => {
|
||||||
@@ -140,7 +145,7 @@ describe('v1/user-events', () => {
|
|||||||
expect(globals.logger.warn).toHaveBeenCalledWith(
|
expect(globals.logger.warn).toHaveBeenCalledWith(
|
||||||
expect.stringContaining('Missing required field')
|
expect.stringContaining('Missing required field')
|
||||||
);
|
);
|
||||||
expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
|
expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
|
||||||
});
|
});
|
||||||
|
|
||||||
test('should validate required fields - missing user_id', async () => {
|
test('should validate required fields - missing user_id', async () => {
|
||||||
@@ -156,7 +161,7 @@ describe('v1/user-events', () => {
|
|||||||
expect(globals.logger.warn).toHaveBeenCalledWith(
|
expect(globals.logger.warn).toHaveBeenCalledWith(
|
||||||
expect.stringContaining('Missing required field')
|
expect.stringContaining('Missing required field')
|
||||||
);
|
);
|
||||||
expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
|
expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
|
||||||
});
|
});
|
||||||
|
|
||||||
test('should validate required fields - missing origin', async () => {
|
test('should validate required fields - missing origin', async () => {
|
||||||
@@ -172,7 +177,7 @@ describe('v1/user-events', () => {
|
|||||||
expect(globals.logger.warn).toHaveBeenCalledWith(
|
expect(globals.logger.warn).toHaveBeenCalledWith(
|
||||||
expect.stringContaining('Missing required field')
|
expect.stringContaining('Missing required field')
|
||||||
);
|
);
|
||||||
expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
|
expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
|
||||||
});
|
});
|
||||||
|
|
||||||
test('should create correct datapoint with config tags', async () => {
|
test('should create correct datapoint with config tags', async () => {
|
||||||
@@ -184,10 +189,6 @@ describe('v1/user-events', () => {
|
|||||||
origin: 'AppAccess',
|
origin: 'AppAccess',
|
||||||
};
|
};
|
||||||
|
|
||||||
utils.writeToInfluxWithRetry.mockImplementation(async (writeFn) => {
|
|
||||||
await writeFn();
|
|
||||||
});
|
|
||||||
|
|
||||||
await storeUserEventV1(msg);
|
await storeUserEventV1(msg);
|
||||||
|
|
||||||
const expectedDatapoint = expect.arrayContaining([
|
const expectedDatapoint = expect.arrayContaining([
|
||||||
@@ -209,7 +210,12 @@ describe('v1/user-events', () => {
|
|||||||
}),
|
}),
|
||||||
]);
|
]);
|
||||||
|
|
||||||
expect(globals.influx.writePoints).toHaveBeenCalledWith(expectedDatapoint);
|
expect(utils.writeBatchToInfluxV1).toHaveBeenCalledWith(
|
||||||
|
expectedDatapoint,
|
||||||
|
'User event',
|
||||||
|
'server1',
|
||||||
|
100
|
||||||
|
);
|
||||||
});
|
});
|
||||||
|
|
||||||
test('should handle write errors', async () => {
|
test('should handle write errors', async () => {
|
||||||
@@ -222,7 +228,7 @@ describe('v1/user-events', () => {
|
|||||||
};
|
};
|
||||||
|
|
||||||
const writeError = new Error('Write failed');
|
const writeError = new Error('Write failed');
|
||||||
utils.writeToInfluxWithRetry.mockRejectedValue(writeError);
|
utils.writeBatchToInfluxV1.mockRejectedValue(writeError);
|
||||||
|
|
||||||
await expect(storeUserEventV1(msg)).rejects.toThrow('Write failed');
|
await expect(storeUserEventV1(msg)).rejects.toThrow('Write failed');
|
||||||
|
|
||||||
|
|||||||
@@ -34,6 +34,7 @@ jest.unstable_mockModule('@influxdata/influxdb-client', () => ({
|
|||||||
const mockUtils = {
|
const mockUtils = {
|
||||||
isInfluxDbEnabled: jest.fn(),
|
isInfluxDbEnabled: jest.fn(),
|
||||||
writeToInfluxWithRetry: jest.fn(),
|
writeToInfluxWithRetry: jest.fn(),
|
||||||
|
writeBatchToInfluxV2: jest.fn(),
|
||||||
};
|
};
|
||||||
|
|
||||||
jest.unstable_mockModule('../shared/utils.js', () => mockUtils);
|
jest.unstable_mockModule('../shared/utils.js', () => mockUtils);
|
||||||
@@ -56,11 +57,11 @@ describe('v2/butler-memory', () => {
|
|||||||
globals.config.get.mockImplementation((path) => {
|
globals.config.get.mockImplementation((path) => {
|
||||||
if (path.includes('org')) return 'test-org';
|
if (path.includes('org')) return 'test-org';
|
||||||
if (path.includes('bucket')) return 'test-bucket';
|
if (path.includes('bucket')) return 'test-bucket';
|
||||||
|
if (path.includes('maxBatchSize')) return 100;
|
||||||
return undefined;
|
return undefined;
|
||||||
});
|
});
|
||||||
|
|
||||||
utils.isInfluxDbEnabled.mockReturnValue(true);
|
utils.isInfluxDbEnabled.mockReturnValue(true);
|
||||||
utils.writeToInfluxWithRetry.mockImplementation(async (fn) => await fn());
|
|
||||||
mockWriteApi.writePoint.mockResolvedValue(undefined);
|
mockWriteApi.writePoint.mockResolvedValue(undefined);
|
||||||
});
|
});
|
||||||
|
|
||||||
@@ -74,12 +75,12 @@ describe('v2/butler-memory', () => {
|
|||||||
processMemoryMByte: 250,
|
processMemoryMByte: 250,
|
||||||
};
|
};
|
||||||
await storeButlerMemoryV2(memory);
|
await storeButlerMemoryV2(memory);
|
||||||
expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
|
expect(utils.writeBatchToInfluxV2).not.toHaveBeenCalled();
|
||||||
});
|
});
|
||||||
|
|
||||||
test('should return early with invalid memory data', async () => {
|
test('should return early with invalid memory data', async () => {
|
||||||
await storeButlerMemoryV2(null);
|
await storeButlerMemoryV2(null);
|
||||||
expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
|
expect(utils.writeBatchToInfluxV2).not.toHaveBeenCalled();
|
||||||
expect(globals.logger.warn).toHaveBeenCalledWith(
|
expect(globals.logger.warn).toHaveBeenCalledWith(
|
||||||
'MEMORY USAGE V2: Invalid memory data provided'
|
'MEMORY USAGE V2: Invalid memory data provided'
|
||||||
);
|
);
|
||||||
@@ -87,7 +88,7 @@ describe('v2/butler-memory', () => {
|
|||||||
|
|
||||||
test('should return early with non-object memory data', async () => {
|
test('should return early with non-object memory data', async () => {
|
||||||
await storeButlerMemoryV2('not an object');
|
await storeButlerMemoryV2('not an object');
|
||||||
expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
|
expect(utils.writeBatchToInfluxV2).not.toHaveBeenCalled();
|
||||||
expect(globals.logger.warn).toHaveBeenCalled();
|
expect(globals.logger.warn).toHaveBeenCalled();
|
||||||
});
|
});
|
||||||
|
|
||||||
@@ -109,9 +110,14 @@ describe('v2/butler-memory', () => {
|
|||||||
expect(mockPoint.floatField).toHaveBeenCalledWith('heap_total', 300.2);
|
expect(mockPoint.floatField).toHaveBeenCalledWith('heap_total', 300.2);
|
||||||
expect(mockPoint.floatField).toHaveBeenCalledWith('external', 75.8);
|
expect(mockPoint.floatField).toHaveBeenCalledWith('external', 75.8);
|
||||||
expect(mockPoint.floatField).toHaveBeenCalledWith('process_memory', 400.1);
|
expect(mockPoint.floatField).toHaveBeenCalledWith('process_memory', 400.1);
|
||||||
expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
|
expect(utils.writeBatchToInfluxV2).toHaveBeenCalledWith(
|
||||||
expect(mockWriteApi.writePoint).toHaveBeenCalled();
|
[mockPoint],
|
||||||
expect(mockWriteApi.close).toHaveBeenCalled();
|
'test-org',
|
||||||
|
'test-bucket',
|
||||||
|
'Memory usage metrics',
|
||||||
|
'',
|
||||||
|
100
|
||||||
|
);
|
||||||
expect(globals.logger.verbose).toHaveBeenCalledWith(
|
expect(globals.logger.verbose).toHaveBeenCalledWith(
|
||||||
'MEMORY USAGE V2: Sent Butler SOS memory usage data to InfluxDB'
|
'MEMORY USAGE V2: Sent Butler SOS memory usage data to InfluxDB'
|
||||||
);
|
);
|
||||||
@@ -129,7 +135,7 @@ describe('v2/butler-memory', () => {
|
|||||||
await storeButlerMemoryV2(memory);
|
await storeButlerMemoryV2(memory);
|
||||||
|
|
||||||
expect(mockPoint.floatField).toHaveBeenCalledWith('heap_used', 0);
|
expect(mockPoint.floatField).toHaveBeenCalledWith('heap_used', 0);
|
||||||
expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
|
expect(utils.writeBatchToInfluxV2).toHaveBeenCalled();
|
||||||
});
|
});
|
||||||
|
|
||||||
test('should log silly level debug info', async () => {
|
test('should log silly level debug info', async () => {
|
||||||
|
|||||||
@@ -51,6 +51,7 @@ jest.unstable_mockModule('@influxdata/influxdb-client', () => ({
|
|||||||
const mockUtils = {
|
const mockUtils = {
|
||||||
isInfluxDbEnabled: jest.fn(),
|
isInfluxDbEnabled: jest.fn(),
|
||||||
writeToInfluxWithRetry: jest.fn(),
|
writeToInfluxWithRetry: jest.fn(),
|
||||||
|
writeBatchToInfluxV2: jest.fn(),
|
||||||
};
|
};
|
||||||
|
|
||||||
jest.unstable_mockModule('../shared/utils.js', () => mockUtils);
|
jest.unstable_mockModule('../shared/utils.js', () => mockUtils);
|
||||||
@@ -129,9 +130,7 @@ describe('v2/event-counts', () => {
|
|||||||
expect(mockPoint.intField).toHaveBeenCalledWith('counter', 200);
|
expect(mockPoint.intField).toHaveBeenCalledWith('counter', 200);
|
||||||
expect(mockPoint.intField).toHaveBeenCalledWith('counter', 100);
|
expect(mockPoint.intField).toHaveBeenCalledWith('counter', 100);
|
||||||
expect(mockV2Utils.applyInfluxTags).toHaveBeenCalledTimes(2);
|
expect(mockV2Utils.applyInfluxTags).toHaveBeenCalledTimes(2);
|
||||||
expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
|
expect(utils.writeBatchToInfluxV2).toHaveBeenCalled();
|
||||||
expect(mockWriteApi.writePoints).toHaveBeenCalled();
|
|
||||||
expect(mockWriteApi.close).toHaveBeenCalled();
|
|
||||||
});
|
});
|
||||||
|
|
||||||
test('should handle zero counts', async () => {
|
test('should handle zero counts', async () => {
|
||||||
@@ -184,7 +183,7 @@ describe('v2/event-counts', () => {
|
|||||||
expect(mockPoint.tag).toHaveBeenCalledWith('source', 'qseow-proxy');
|
expect(mockPoint.tag).toHaveBeenCalledWith('source', 'qseow-proxy');
|
||||||
expect(mockPoint.intField).toHaveBeenCalledWith('counter', 5);
|
expect(mockPoint.intField).toHaveBeenCalledWith('counter', 5);
|
||||||
expect(mockPoint.intField).toHaveBeenCalledWith('counter', 3);
|
expect(mockPoint.intField).toHaveBeenCalledWith('counter', 3);
|
||||||
expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
|
expect(utils.writeBatchToInfluxV2).toHaveBeenCalled();
|
||||||
});
|
});
|
||||||
|
|
||||||
test('should handle empty rejection tags', async () => {
|
test('should handle empty rejection tags', async () => {
|
||||||
|
|||||||
@@ -38,6 +38,7 @@ jest.unstable_mockModule('@influxdata/influxdb-client', () => ({
|
|||||||
const mockUtils = {
|
const mockUtils = {
|
||||||
isInfluxDbEnabled: jest.fn(),
|
isInfluxDbEnabled: jest.fn(),
|
||||||
writeToInfluxWithRetry: jest.fn(),
|
writeToInfluxWithRetry: jest.fn(),
|
||||||
|
writeBatchToInfluxV2: jest.fn(),
|
||||||
processAppDocuments: jest.fn(),
|
processAppDocuments: jest.fn(),
|
||||||
getFormattedTime: jest.fn(() => '2 days, 3 hours'),
|
getFormattedTime: jest.fn(() => '2 days, 3 hours'),
|
||||||
};
|
};
|
||||||
|
|||||||
@@ -35,6 +35,7 @@ jest.unstable_mockModule('@influxdata/influxdb-client', () => ({
|
|||||||
const mockUtils = {
|
const mockUtils = {
|
||||||
isInfluxDbEnabled: jest.fn(),
|
isInfluxDbEnabled: jest.fn(),
|
||||||
writeToInfluxWithRetry: jest.fn(),
|
writeToInfluxWithRetry: jest.fn(),
|
||||||
|
writeBatchToInfluxV2: jest.fn(),
|
||||||
};
|
};
|
||||||
|
|
||||||
jest.unstable_mockModule('../shared/utils.js', () => mockUtils);
|
jest.unstable_mockModule('../shared/utils.js', () => mockUtils);
|
||||||
@@ -99,8 +100,8 @@ describe('v2/log-events', () => {
|
|||||||
};
|
};
|
||||||
await storeLogEventV2(msg);
|
await storeLogEventV2(msg);
|
||||||
// Implementation doesn't explicitly validate required fields, it just processes what's there
|
// Implementation doesn't explicitly validate required fields, it just processes what's there
|
||||||
// So this test will actually call writeToInfluxWithRetry
|
// So this test will actually call writeBatchToInfluxV2
|
||||||
expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
|
expect(utils.writeBatchToInfluxV2).toHaveBeenCalled();
|
||||||
});
|
});
|
||||||
|
|
||||||
test('should return early with unsupported source', async () => {
|
test('should return early with unsupported source', async () => {
|
||||||
@@ -114,7 +115,7 @@ describe('v2/log-events', () => {
|
|||||||
};
|
};
|
||||||
await storeLogEventV2(msg);
|
await storeLogEventV2(msg);
|
||||||
expect(globals.logger.warn).toHaveBeenCalled();
|
expect(globals.logger.warn).toHaveBeenCalled();
|
||||||
expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
|
expect(utils.writeBatchToInfluxV2).not.toHaveBeenCalled();
|
||||||
});
|
});
|
||||||
|
|
||||||
test('should write engine log event', async () => {
|
test('should write engine log event', async () => {
|
||||||
@@ -167,7 +168,7 @@ describe('v2/log-events', () => {
|
|||||||
expect(mockPoint.stringField).toHaveBeenCalledWith('context', 'Init');
|
expect(mockPoint.stringField).toHaveBeenCalledWith('context', 'Init');
|
||||||
expect(mockPoint.stringField).toHaveBeenCalledWith('session_id', '');
|
expect(mockPoint.stringField).toHaveBeenCalledWith('session_id', '');
|
||||||
expect(mockPoint.stringField).toHaveBeenCalledWith('raw_event', expect.any(String));
|
expect(mockPoint.stringField).toHaveBeenCalledWith('raw_event', expect.any(String));
|
||||||
expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
|
expect(utils.writeBatchToInfluxV2).toHaveBeenCalled();
|
||||||
});
|
});
|
||||||
|
|
||||||
test('should write proxy log event', async () => {
|
test('should write proxy log event', async () => {
|
||||||
@@ -194,7 +195,7 @@ describe('v2/log-events', () => {
|
|||||||
expect(mockPoint.tag).toHaveBeenCalledWith('result_code', '403');
|
expect(mockPoint.tag).toHaveBeenCalledWith('result_code', '403');
|
||||||
expect(mockPoint.stringField).toHaveBeenCalledWith('command', 'Login');
|
expect(mockPoint.stringField).toHaveBeenCalledWith('command', 'Login');
|
||||||
expect(mockPoint.stringField).toHaveBeenCalledWith('result_code_field', '403');
|
expect(mockPoint.stringField).toHaveBeenCalledWith('result_code_field', '403');
|
||||||
expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
|
expect(utils.writeBatchToInfluxV2).toHaveBeenCalled();
|
||||||
});
|
});
|
||||||
|
|
||||||
test('should write repository log event', async () => {
|
test('should write repository log event', async () => {
|
||||||
@@ -216,7 +217,7 @@ describe('v2/log-events', () => {
|
|||||||
'exception_message',
|
'exception_message',
|
||||||
'Connection timeout'
|
'Connection timeout'
|
||||||
);
|
);
|
||||||
expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
|
expect(utils.writeBatchToInfluxV2).toHaveBeenCalled();
|
||||||
});
|
});
|
||||||
|
|
||||||
test('should write scheduler log event', async () => {
|
test('should write scheduler log event', async () => {
|
||||||
@@ -237,7 +238,7 @@ describe('v2/log-events', () => {
|
|||||||
expect(mockPoint.tag).toHaveBeenCalledWith('level', 'INFO');
|
expect(mockPoint.tag).toHaveBeenCalledWith('level', 'INFO');
|
||||||
expect(mockPoint.tag).toHaveBeenCalledWith('task_id', 'sched-task-001');
|
expect(mockPoint.tag).toHaveBeenCalledWith('task_id', 'sched-task-001');
|
||||||
expect(mockPoint.tag).toHaveBeenCalledWith('task_name', 'Daily Reload');
|
expect(mockPoint.tag).toHaveBeenCalledWith('task_name', 'Daily Reload');
|
||||||
expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
|
expect(utils.writeBatchToInfluxV2).toHaveBeenCalled();
|
||||||
});
|
});
|
||||||
|
|
||||||
test('should handle log event with minimal fields', async () => {
|
test('should handle log event with minimal fields', async () => {
|
||||||
@@ -256,7 +257,7 @@ describe('v2/log-events', () => {
|
|||||||
expect(mockPoint.tag).toHaveBeenCalledWith('source', 'qseow-engine');
|
expect(mockPoint.tag).toHaveBeenCalledWith('source', 'qseow-engine');
|
||||||
expect(mockPoint.tag).toHaveBeenCalledWith('level', 'DEBUG');
|
expect(mockPoint.tag).toHaveBeenCalledWith('level', 'DEBUG');
|
||||||
expect(mockPoint.stringField).toHaveBeenCalledWith('message', 'Debug message');
|
expect(mockPoint.stringField).toHaveBeenCalledWith('message', 'Debug message');
|
||||||
expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
|
expect(utils.writeBatchToInfluxV2).toHaveBeenCalled();
|
||||||
});
|
});
|
||||||
|
|
||||||
test('should handle empty string fields', async () => {
|
test('should handle empty string fields', async () => {
|
||||||
@@ -274,7 +275,7 @@ describe('v2/log-events', () => {
|
|||||||
|
|
||||||
await storeLogEventV2(msg);
|
await storeLogEventV2(msg);
|
||||||
|
|
||||||
expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
|
expect(utils.writeBatchToInfluxV2).toHaveBeenCalled();
|
||||||
});
|
});
|
||||||
|
|
||||||
test('should apply config tags', async () => {
|
test('should apply config tags', async () => {
|
||||||
@@ -42,6 +42,7 @@ jest.unstable_mockModule('@influxdata/influxdb-client', () => ({
 const mockUtils = {
     isInfluxDbEnabled: jest.fn(),
     writeToInfluxWithRetry: jest.fn(),
+    writeBatchToInfluxV2: jest.fn(),
 };
 
 jest.unstable_mockModule('../shared/utils.js', () => mockUtils);
@@ -76,6 +77,7 @@ describe('v2/queue-metrics', () => {
             if (path.includes('queueMetrics.influxdb.tags'))
                 return [{ name: 'env', value: 'prod' }];
             if (path.includes('enable')) return true;
+            if (path === 'Butler-SOS.influxdbConfig.maxBatchSize') return 100;
             return undefined;
         });
         globals.config.has.mockReturnValue(true);
@@ -84,7 +86,7 @@ describe('v2/queue-metrics', () => {
         globals.udpQueueManagerLogEvents = mockQueueManager;
 
         utils.isInfluxDbEnabled.mockReturnValue(true);
-        utils.writeToInfluxWithRetry.mockImplementation(async (cb) => await cb());
+        utils.writeBatchToInfluxV2.mockResolvedValue();
 
         mockWriteApi.writePoint.mockResolvedValue(undefined);
         mockWriteApi.close.mockResolvedValue(undefined);
@@ -114,7 +116,7 @@ describe('v2/queue-metrics', () => {
        test('should return early when InfluxDB disabled', async () => {
            utils.isInfluxDbEnabled.mockReturnValue(false);
            await storeUserEventQueueMetricsV2();
-           expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+           expect(utils.writeBatchToInfluxV2).not.toHaveBeenCalled();
        });
 
        test('should return early when feature disabled', async () => {
@@ -123,13 +125,13 @@ describe('v2/queue-metrics', () => {
                return undefined;
            });
            await storeUserEventQueueMetricsV2();
-           expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+           expect(utils.writeBatchToInfluxV2).not.toHaveBeenCalled();
        });
 
        test('should return early when queue manager not initialized', async () => {
            globals.udpQueueManagerUserActivity = null;
            await storeUserEventQueueMetricsV2();
-           expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+           expect(utils.writeBatchToInfluxV2).not.toHaveBeenCalled();
            expect(globals.logger.warn).toHaveBeenCalledWith(
                'USER EVENT QUEUE METRICS V2: Queue manager not initialized'
            );
@@ -161,9 +163,14 @@ describe('v2/queue-metrics', () => {
            expect(mockV2Utils.applyInfluxTags).toHaveBeenCalledWith(mockPoint, [
                { name: 'env', value: 'prod' },
            ]);
-           expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
-           expect(mockWriteApi.writePoint).toHaveBeenCalledWith(mockPoint);
-           expect(mockWriteApi.close).toHaveBeenCalled();
+           expect(utils.writeBatchToInfluxV2).toHaveBeenCalledWith(
+               [mockPoint],
+               'test-org',
+               'test-bucket',
+               'User event queue metrics',
+               'user-events-queue',
+               100
+           );
            expect(mockQueueManager.clearMetrics).toHaveBeenCalled();
        });
 
@@ -191,7 +198,14 @@ describe('v2/queue-metrics', () => {
            await storeUserEventQueueMetricsV2();
 
            expect(mockPoint.intField).toHaveBeenCalledWith('queue_size', 0);
-           expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+           expect(utils.writeBatchToInfluxV2).toHaveBeenCalledWith(
+               [mockPoint],
+               'test-org',
+               'test-bucket',
+               'User event queue metrics',
+               'user-events-queue',
+               100
+           );
        });
 
        test('should log verbose information', async () => {
@@ -207,7 +221,7 @@ describe('v2/queue-metrics', () => {
        test('should return early when InfluxDB disabled', async () => {
            utils.isInfluxDbEnabled.mockReturnValue(false);
            await storeLogEventQueueMetricsV2();
-           expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+           expect(utils.writeBatchToInfluxV2).not.toHaveBeenCalled();
        });
 
        test('should return early when feature disabled', async () => {
@@ -216,13 +230,13 @@ describe('v2/queue-metrics', () => {
                return undefined;
            });
            await storeLogEventQueueMetricsV2();
-           expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+           expect(utils.writeBatchToInfluxV2).not.toHaveBeenCalled();
        });
 
        test('should return early when queue manager not initialized', async () => {
            globals.udpQueueManagerLogEvents = null;
            await storeLogEventQueueMetricsV2();
-           expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+           expect(utils.writeBatchToInfluxV2).not.toHaveBeenCalled();
            expect(globals.logger.warn).toHaveBeenCalledWith(
                'LOG EVENT QUEUE METRICS V2: Queue manager not initialized'
            );
@@ -235,7 +249,14 @@ describe('v2/queue-metrics', () => {
            expect(mockPoint.tag).toHaveBeenCalledWith('queue_type', 'log_events');
            expect(mockPoint.tag).toHaveBeenCalledWith('host', 'test-host');
            expect(mockPoint.intField).toHaveBeenCalledWith('queue_size', 100);
-           expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+           expect(utils.writeBatchToInfluxV2).toHaveBeenCalledWith(
+               [mockPoint],
+               'test-org',
+               'test-bucket',
+               'Log event queue metrics',
+               'log-events-queue',
+               100
+           );
            expect(mockQueueManager.clearMetrics).toHaveBeenCalled();
        });
 
@@ -264,7 +285,14 @@ describe('v2/queue-metrics', () => {
 
            expect(mockPoint.floatField).toHaveBeenCalledWith('queue_utilization_pct', 95.0);
            expect(mockPoint.intField).toHaveBeenCalledWith('backpressure_active', 1);
-           expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+           expect(utils.writeBatchToInfluxV2).toHaveBeenCalledWith(
+               [mockPoint],
+               'test-org',
+               'test-bucket',
+               'Log event queue metrics',
+               'log-events-queue',
+               100
+           );
        });
 
        test('should log verbose information', async () => {
@@ -30,6 +30,7 @@ jest.unstable_mockModule('../../../globals.js', () => ({ default: mockGlobals })
 const mockUtils = {
     isInfluxDbEnabled: jest.fn(),
     writeToInfluxWithRetry: jest.fn(),
+    writeBatchToInfluxV2: jest.fn(),
 };
 
 jest.unstable_mockModule('../shared/utils.js', () => mockUtils);
@@ -33,6 +33,7 @@ jest.unstable_mockModule('@influxdata/influxdb-client', () => ({
 const mockUtils = {
     isInfluxDbEnabled: jest.fn(),
     writeToInfluxWithRetry: jest.fn(),
+    writeBatchToInfluxV2: jest.fn(),
 };
 
 jest.unstable_mockModule('../shared/utils.js', () => mockUtils);
@@ -30,6 +30,7 @@ jest.unstable_mockModule('../../../globals.js', () => ({
 const mockUtils = {
     isInfluxDbEnabled: jest.fn(),
     writeToInfluxWithRetry: jest.fn(),
+    writeBatchToInfluxV3: jest.fn(),
 };
 
 jest.unstable_mockModule('../shared/utils.js', () => mockUtils);
@@ -61,7 +62,7 @@ describe('v3/butler-memory', () => {
         // Setup default mocks
         globals.config.get.mockReturnValue('test-db');
         utils.isInfluxDbEnabled.mockReturnValue(true);
-        utils.writeToInfluxWithRetry.mockResolvedValue();
+        utils.writeBatchToInfluxV3.mockResolvedValue();
     });
 
     describe('postButlerSOSMemoryUsageToInfluxdbV3', () => {
@@ -78,7 +79,7 @@ describe('v3/butler-memory', () => {
 
            await postButlerSOSMemoryUsageToInfluxdbV3(memory);
 
-           expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+           expect(utils.writeBatchToInfluxV3).not.toHaveBeenCalled();
        });
 
        test('should successfully write memory usage metrics', async () => {
@@ -98,7 +99,7 @@ describe('v3/butler-memory', () => {
            expect(mockPoint.setFloatField).toHaveBeenCalledWith('heap_total', 200.75);
            expect(mockPoint.setFloatField).toHaveBeenCalledWith('external', 50.25);
            expect(mockPoint.setFloatField).toHaveBeenCalledWith('process_memory', 250.5);
-           expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+           expect(utils.writeBatchToInfluxV3).toHaveBeenCalled();
        });
 
        test('should handle write errors', async () => {
@@ -111,7 +112,7 @@ describe('v3/butler-memory', () => {
            };
 
            const writeError = new Error('Write failed');
-           utils.writeToInfluxWithRetry.mockRejectedValue(writeError);
+           utils.writeBatchToInfluxV3.mockRejectedValue(writeError);
 
            await postButlerSOSMemoryUsageToInfluxdbV3(memory);
 
@@ -40,6 +40,7 @@ jest.unstable_mockModule('../../../globals.js', () => ({
 const mockUtils = {
     isInfluxDbEnabled: jest.fn(),
     writeToInfluxWithRetry: jest.fn(),
+    writeBatchToInfluxV3: jest.fn(),
 };
 
 jest.unstable_mockModule('../shared/utils.js', () => mockUtils);
@@ -78,11 +79,13 @@ describe('v3/event-counts', () => {
                return 'event_count';
            if (key === 'Butler-SOS.qlikSenseEvents.rejectedEventCount.influxdb.measurementName')
                return 'rejected_event_count';
+           if (key === 'Butler-SOS.influxdbConfig.maxBatchSize') return 100;
            return null;
        });
        globals.config.has.mockReturnValue(false);
        utils.isInfluxDbEnabled.mockReturnValue(true);
        utils.writeToInfluxWithRetry.mockResolvedValue();
+       utils.writeBatchToInfluxV3.mockResolvedValue();
    });
 
    describe('storeEventCountInfluxDBV3', () => {
@@ -128,7 +131,7 @@ describe('v3/event-counts', () => {
 
            await storeEventCountInfluxDBV3();
 
-           expect(utils.writeToInfluxWithRetry).toHaveBeenCalledTimes(2);
+           expect(utils.writeBatchToInfluxV3).toHaveBeenCalledTimes(1);
            expect(mockPoint.setTag).toHaveBeenCalledWith('event_type', 'log');
            expect(mockPoint.setTag).toHaveBeenCalledWith('source', 'qseow-engine');
            expect(mockPoint.setIntegerField).toHaveBeenCalledWith('counter', 10);
@@ -148,7 +151,7 @@ describe('v3/event-counts', () => {
 
            await storeEventCountInfluxDBV3();
 
-           expect(utils.writeToInfluxWithRetry).toHaveBeenCalledTimes(1);
+           expect(utils.writeBatchToInfluxV3).toHaveBeenCalledTimes(1);
            expect(mockPoint.setTag).toHaveBeenCalledWith('event_type', 'user');
            expect(mockPoint.setIntegerField).toHaveBeenCalledWith('counter', 15);
        });
@@ -166,7 +169,7 @@ describe('v3/event-counts', () => {
 
            await storeEventCountInfluxDBV3();
 
-           expect(utils.writeToInfluxWithRetry).toHaveBeenCalledTimes(2);
+           expect(utils.writeBatchToInfluxV3).toHaveBeenCalledTimes(1);
        });
 
        test('should apply config tags when available', async () => {
@@ -199,7 +202,7 @@ describe('v3/event-counts', () => {
            globals.udpEvents.getUserEvents.mockResolvedValue([]);
 
            const writeError = new Error('Write failed');
-           utils.writeToInfluxWithRetry.mockRejectedValue(writeError);
+           utils.writeBatchToInfluxV3.mockRejectedValue(writeError);
 
            await storeEventCountInfluxDBV3();
 
@@ -218,7 +221,7 @@ describe('v3/event-counts', () => {
            expect(globals.logger.verbose).toHaveBeenCalledWith(
                expect.stringContaining('No events to store')
            );
-           expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+           expect(utils.writeBatchToInfluxV3).not.toHaveBeenCalled();
        });
 
        test('should return early when InfluxDB is disabled', async () => {
@@ -227,7 +230,7 @@ describe('v3/event-counts', () => {
 
            await storeRejectedEventCountInfluxDBV3();
 
-           expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+           expect(utils.writeBatchToInfluxV3).not.toHaveBeenCalled();
        });
 
        test('should store rejected log events successfully', async () => {
@@ -248,7 +251,7 @@ describe('v3/event-counts', () => {
            await storeRejectedEventCountInfluxDBV3();
 
            // Should have written the rejected event
-           expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+           expect(utils.writeBatchToInfluxV3).toHaveBeenCalled();
            expect(globals.logger.debug).toHaveBeenCalledWith(
                expect.stringContaining('Wrote data to InfluxDB v3')
            );
@@ -266,7 +269,7 @@ describe('v3/event-counts', () => {
            globals.rejectedEvents.getRejectedLogEvents.mockResolvedValue(logEvents);
 
            const writeError = new Error('Write failed');
-           utils.writeToInfluxWithRetry.mockRejectedValue(writeError);
+           utils.writeBatchToInfluxV3.mockRejectedValue(writeError);
 
            await storeRejectedEventCountInfluxDBV3();
 
@@ -34,6 +34,7 @@ const mockUtils = {
     isInfluxDbEnabled: jest.fn(),
     applyTagsToPoint3: jest.fn(),
     writeToInfluxWithRetry: jest.fn(),
+    writeBatchToInfluxV3: jest.fn(),
     validateUnsignedField: jest.fn((value) =>
         typeof value === 'number' && value >= 0 ? value : 0
     ),
@@ -80,6 +81,7 @@ describe('v3/health-metrics', () => {
            if (key === 'Butler-SOS.influxdbConfig.includeFields.loadedDocs') return true;
            if (key === 'Butler-SOS.influxdbConfig.includeFields.inMemoryDocs') return true;
            if (key === 'Butler-SOS.appNames.enableAppNameExtract') return true;
+           if (key === 'Butler-SOS.influxdbConfig.maxBatchSize') return 100;
            return false;
        });
 
@@ -89,7 +91,7 @@ describe('v3/health-metrics', () => {
            sessionAppNames: ['SessionApp1'],
        });
        utils.isInfluxDbEnabled.mockReturnValue(true);
-       utils.writeToInfluxWithRetry.mockResolvedValue();
+       utils.writeBatchToInfluxV3.mockResolvedValue();
        utils.applyTagsToPoint3.mockImplementation(() => {});
 
        // Setup influxWriteApi
@@ -148,7 +150,7 @@ describe('v3/health-metrics', () => {
 
            await postHealthMetricsToInfluxdbV3('test-server', 'test-host', body, {});
 
-           expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+           expect(utils.writeBatchToInfluxV3).not.toHaveBeenCalled();
        });
 
        test('should warn and return when influxWriteApi is not initialized', async () => {
@@ -160,7 +162,7 @@ describe('v3/health-metrics', () => {
            expect(globals.logger.warn).toHaveBeenCalledWith(
                expect.stringContaining('Influxdb write API object not initialized')
            );
-           expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+           expect(utils.writeBatchToInfluxV3).not.toHaveBeenCalled();
        });
 
        test('should warn and return when writeApi not found for server', async () => {
@@ -171,7 +173,7 @@ describe('v3/health-metrics', () => {
            expect(globals.logger.warn).toHaveBeenCalledWith(
                expect.stringContaining('Influxdb write API object not found for host test-host')
            );
-           expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+           expect(utils.writeBatchToInfluxV3).not.toHaveBeenCalled();
        });
 
        test('should process and write all health metrics successfully', async () => {
@@ -201,8 +203,15 @@ describe('v3/health-metrics', () => {
            // Should apply tags to all 8 points
            expect(utils.applyTagsToPoint3).toHaveBeenCalledTimes(8);
 
-           // Should write all 8 measurements
-           expect(utils.writeToInfluxWithRetry).toHaveBeenCalledTimes(8);
+           // Should write all 8 measurements in one batch
+           expect(utils.writeBatchToInfluxV3).toHaveBeenCalledTimes(1);
+           expect(utils.writeBatchToInfluxV3).toHaveBeenCalledWith(
+               expect.any(Array),
+               'test-db',
+               expect.stringContaining('Health metrics for'),
+               'health-metrics',
+               100
+           );
        });
 
        test('should call getFormattedTime with started timestamp', async () => {
@@ -231,7 +240,7 @@ describe('v3/health-metrics', () => {
        test('should handle write errors with error tracking', async () => {
            const body = createMockBody();
            const writeError = new Error('Write failed');
-           utils.writeToInfluxWithRetry.mockRejectedValue(writeError);
+           utils.writeBatchToInfluxV3.mockRejectedValue(writeError);
 
            await postHealthMetricsToInfluxdbV3('test-server', 'test-host', body, {});
 
@@ -30,6 +30,7 @@ jest.unstable_mockModule('../../../globals.js', () => ({
 const mockUtils = {
     isInfluxDbEnabled: jest.fn(),
     writeToInfluxWithRetry: jest.fn(),
+    writeBatchToInfluxV3: jest.fn(),
 };
 
 jest.unstable_mockModule('../shared/utils.js', () => mockUtils);
@@ -110,7 +111,7 @@ describe('v3/log-events', () => {
            expect(mockPoint.setTag).toHaveBeenCalledWith('source', 'qseow-engine');
            expect(mockPoint.setTag).toHaveBeenCalledWith('level', 'INFO');
            expect(mockPoint.setStringField).toHaveBeenCalledWith('message', 'Test message');
-           expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+           expect(utils.writeBatchToInfluxV3).toHaveBeenCalled();
        });
 
        test('should successfully write qseow-proxy log event', async () => {
@@ -126,7 +127,7 @@ describe('v3/log-events', () => {
            expect(mockPoint.setTag).toHaveBeenCalledWith('host', 'server1');
            expect(mockPoint.setTag).toHaveBeenCalledWith('source', 'qseow-proxy');
            expect(mockPoint.setTag).toHaveBeenCalledWith('level', 'WARN');
-           expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+           expect(utils.writeBatchToInfluxV3).toHaveBeenCalled();
        });
 
        test('should successfully write qseow-scheduler log event', async () => {
@@ -140,7 +141,7 @@ describe('v3/log-events', () => {
            await postLogEventToInfluxdbV3(msg);
 
            expect(mockPoint.setTag).toHaveBeenCalledWith('source', 'qseow-scheduler');
-           expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+           expect(utils.writeBatchToInfluxV3).toHaveBeenCalled();
        });
 
        test('should successfully write qseow-repository log event', async () => {
@@ -154,7 +155,7 @@ describe('v3/log-events', () => {
            await postLogEventToInfluxdbV3(msg);
 
            expect(mockPoint.setTag).toHaveBeenCalledWith('source', 'qseow-repository');
-           expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+           expect(utils.writeBatchToInfluxV3).toHaveBeenCalled();
        });
 
        test('should successfully write qseow-qix-perf log event', async () => {
@@ -178,7 +179,7 @@ describe('v3/log-events', () => {
            await postLogEventToInfluxdbV3(msg);
 
            expect(mockPoint.setTag).toHaveBeenCalledWith('source', 'qseow-qix-perf');
-           expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+           expect(utils.writeBatchToInfluxV3).toHaveBeenCalled();
        });
 
        test('should handle write errors', async () => {
@@ -190,7 +191,7 @@ describe('v3/log-events', () => {
            };
 
            const writeError = new Error('Write failed');
-           utils.writeToInfluxWithRetry.mockRejectedValue(writeError);
+           utils.writeBatchToInfluxV3.mockRejectedValue(writeError);
 
            await postLogEventToInfluxdbV3(msg);
 
@@ -221,7 +222,7 @@ describe('v3/log-events', () => {
                'exception_message',
                'Exception details'
            );
-           expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+           expect(utils.writeBatchToInfluxV3).toHaveBeenCalled();
        });
    });
});
@@ -49,6 +49,7 @@ jest.unstable_mockModule('@influxdata/influxdb3-client', () => ({
 jest.unstable_mockModule('../shared/utils.js', () => ({
     isInfluxDbEnabled: jest.fn(),
     writeToInfluxWithRetry: jest.fn(),
+    writeBatchToInfluxV3: jest.fn(),
 }));
 
 describe('InfluxDB v3 Queue Metrics', () => {
@@ -69,7 +70,7 @@ describe('InfluxDB v3 Queue Metrics', () => {
 
        // Setup default mocks
        utils.isInfluxDbEnabled.mockReturnValue(true);
-       utils.writeToInfluxWithRetry.mockResolvedValue();
+       utils.writeBatchToInfluxV3.mockResolvedValue();
    });
 
    describe('postUserEventQueueMetricsToInfluxdbV3', () => {
@@ -79,7 +80,7 @@ describe('InfluxDB v3 Queue Metrics', () => {
            await queueMetrics.postUserEventQueueMetricsToInfluxdbV3();
 
            expect(Point3).not.toHaveBeenCalled();
-           expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+           expect(utils.writeBatchToInfluxV3).not.toHaveBeenCalled();
        });
 
        test('should warn when queue manager is not initialized', async () => {
@@ -102,7 +103,7 @@ describe('InfluxDB v3 Queue Metrics', () => {
            await queueMetrics.postUserEventQueueMetricsToInfluxdbV3();
 
            expect(Point3).not.toHaveBeenCalled();
-           expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+           expect(utils.writeBatchToInfluxV3).not.toHaveBeenCalled();
        });
 
        test('should successfully write queue metrics', async () => {
@@ -142,6 +143,9 @@ describe('InfluxDB v3 Queue Metrics', () => {
                if (key === 'Butler-SOS.influxdbConfig.v3Config.database') {
                    return 'test-db';
                }
+               if (key === 'Butler-SOS.influxdbConfig.maxBatchSize') {
+                   return 100;
+               }
                return null;
            });
 
@@ -153,11 +157,12 @@ describe('InfluxDB v3 Queue Metrics', () => {
            await queueMetrics.postUserEventQueueMetricsToInfluxdbV3();
 
            expect(Point3).toHaveBeenCalledWith('user_events_queue');
-           expect(utils.writeToInfluxWithRetry).toHaveBeenCalledWith(
-               expect.any(Function),
+           expect(utils.writeBatchToInfluxV3).toHaveBeenCalledWith(
+               expect.any(Array),
+               'test-db',
                'User event queue metrics',
-               'v3',
-               'user-events-queue'
+               'user-events-queue',
+               100
            );
            expect(globals.logger.verbose).toHaveBeenCalledWith(
                'USER EVENT QUEUE METRICS INFLUXDB V3: Sent queue metrics data to InfluxDB v3'
@@ -194,7 +199,7 @@ describe('InfluxDB v3 Queue Metrics', () => {
            await queueMetrics.postLogEventQueueMetricsToInfluxdbV3();
 
            expect(Point3).not.toHaveBeenCalled();
-           expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+           expect(utils.writeBatchToInfluxV3).not.toHaveBeenCalled();
        });
 
        test('should warn when queue manager is not initialized', async () => {
@@ -246,6 +251,9 @@ describe('InfluxDB v3 Queue Metrics', () => {
                if (key === 'Butler-SOS.influxdbConfig.v3Config.database') {
                    return 'test-db';
                }
+               if (key === 'Butler-SOS.influxdbConfig.maxBatchSize') {
+                   return 100;
+               }
                return null;
            });
 
@@ -257,11 +265,12 @@ describe('InfluxDB v3 Queue Metrics', () => {
            await queueMetrics.postLogEventQueueMetricsToInfluxdbV3();
 
            expect(Point3).toHaveBeenCalledWith('log_events_queue');
-           expect(utils.writeToInfluxWithRetry).toHaveBeenCalledWith(
-               expect.any(Function),
+           expect(utils.writeBatchToInfluxV3).toHaveBeenCalledWith(
+               expect.any(Array),
+               'test-db',
                'Log event queue metrics',
-               'v3',
-               'log-events-queue'
+               'log-events-queue',
+               100
            );
            expect(globals.logger.verbose).toHaveBeenCalledWith(
                'LOG EVENT QUEUE METRICS INFLUXDB V3: Sent queue metrics data to InfluxDB v3'
@@ -312,7 +321,7 @@ describe('InfluxDB v3 Queue Metrics', () => {
                clearMetrics: jest.fn(),
            };
 
-           utils.writeToInfluxWithRetry.mockRejectedValue(new Error('Write failed'));
+           utils.writeBatchToInfluxV3.mockRejectedValue(new Error('Write failed'));
 
            await queueMetrics.postLogEventQueueMetricsToInfluxdbV3();
 
|
|||||||
@@ -30,6 +30,7 @@ jest.unstable_mockModule('../../../globals.js', () => ({
 const mockUtils = {
 isInfluxDbEnabled: jest.fn(),
 writeToInfluxWithRetry: jest.fn(),
+writeBatchToInfluxV3: jest.fn(),
 };

 jest.unstable_mockModule('../shared/utils.js', () => mockUtils);
@@ -59,10 +60,13 @@ describe('v3/sessions', () => {
 postProxySessionsToInfluxdbV3 = sessions.postProxySessionsToInfluxdbV3;

 // Setup default mocks
-globals.config.get.mockReturnValue('test-db');
-globals.influx.write.mockResolvedValue();
+globals.config.get.mockImplementation((key) => {
+if (key === 'Butler-SOS.influxdbConfig.v3Config.database') return 'test-db';
+if (key === 'Butler-SOS.influxdbConfig.maxBatchSize') return 100;
+return undefined;
+});
 utils.isInfluxDbEnabled.mockReturnValue(true);
-utils.writeToInfluxWithRetry.mockImplementation(async (fn) => await fn());
+utils.writeBatchToInfluxV3.mockResolvedValue();
 });

 describe('postProxySessionsToInfluxdbV3', () => {
@@ -80,7 +84,7 @@ describe('v3/sessions', () => {

 await postProxySessionsToInfluxdbV3(userSessions);

-expect(globals.influx.write).not.toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV3).not.toHaveBeenCalled();
 });

 test('should warn when no datapoints to write', async () => {
@@ -115,9 +119,14 @@ describe('v3/sessions', () => {

 await postProxySessionsToInfluxdbV3(userSessions);

-expect(globals.influx.write).toHaveBeenCalledTimes(2);
-expect(globals.influx.write).toHaveBeenCalledWith('session1', 'test-db');
-expect(globals.influx.write).toHaveBeenCalledWith('session2', 'test-db');
+expect(utils.writeBatchToInfluxV3).toHaveBeenCalledTimes(1);
+expect(utils.writeBatchToInfluxV3).toHaveBeenCalledWith(
+[datapoint1, datapoint2],
+'test-db',
+'Proxy sessions for server1//vp1',
+'server1',
+100
+);
 expect(globals.logger.debug).toHaveBeenCalledWith(
 expect.stringContaining('Wrote 2 datapoints')
 );
@@ -135,7 +144,7 @@ describe('v3/sessions', () => {
 };

 const writeError = new Error('Write failed');
-globals.influx.write.mockRejectedValue(writeError);
+utils.writeBatchToInfluxV3.mockRejectedValue(writeError);

 await postProxySessionsToInfluxdbV3(userSessions);

@@ -31,6 +31,7 @@ jest.unstable_mockModule('../../../globals.js', () => ({
 const mockUtils = {
 isInfluxDbEnabled: jest.fn(),
 writeToInfluxWithRetry: jest.fn(),
+writeBatchToInfluxV3: jest.fn(),
 };

 jest.unstable_mockModule('../shared/utils.js', () => mockUtils);
@@ -65,6 +66,7 @@ describe('v3/user-events', () => {
 globals.config.get.mockReturnValue('test-db');
 utils.isInfluxDbEnabled.mockReturnValue(true);
 utils.writeToInfluxWithRetry.mockResolvedValue();
+utils.writeBatchToInfluxV3.mockResolvedValue();
 });

 describe('postUserEventToInfluxdbV3', () => {
@@ -117,7 +119,7 @@ describe('v3/user-events', () => {

 await postUserEventToInfluxdbV3(msg);

-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV3).toHaveBeenCalled();
 expect(mockPoint.setTag).toHaveBeenCalledWith('host', 'server1');
 expect(mockPoint.setTag).toHaveBeenCalledWith('event_action', 'OpenApp');
 expect(mockPoint.setTag).toHaveBeenCalledWith('userDirectory', 'DOMAIN');
@@ -136,7 +138,7 @@ describe('v3/user-events', () => {

 await postUserEventToInfluxdbV3(msg);

-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV3).toHaveBeenCalled();
 expect(mockPoint.setTag).toHaveBeenCalledWith('host', 'server1');
 expect(mockPoint.setTag).toHaveBeenCalledWith('event_action', 'CreateApp');
 });
@@ -152,7 +154,7 @@ describe('v3/user-events', () => {

 await postUserEventToInfluxdbV3(msg);

-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV3).toHaveBeenCalled();
 });

 test('should handle write errors', async () => {
@@ -165,7 +167,7 @@ describe('v3/user-events', () => {
 };

 const writeError = new Error('Write failed');
-utils.writeToInfluxWithRetry.mockRejectedValue(writeError);
+utils.writeBatchToInfluxV3.mockRejectedValue(writeError);

 await postUserEventToInfluxdbV3(msg);

@@ -199,7 +201,7 @@ describe('v3/user-events', () => {

 await postUserEventToInfluxdbV3(msg);

-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV3).toHaveBeenCalled();
 expect(mockPoint.setTag).toHaveBeenCalledWith('uaBrowserName', 'Chrome');
 expect(mockPoint.setTag).toHaveBeenCalledWith('uaBrowserMajorVersion', '96');
 expect(mockPoint.setTag).toHaveBeenCalledWith('uaOsName', 'Windows');
@@ -219,7 +221,7 @@ describe('v3/user-events', () => {

 await postUserEventToInfluxdbV3(msg);

-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV3).toHaveBeenCalled();
 expect(mockPoint.setTag).toHaveBeenCalledWith('appId', 'abc-123-def');
 expect(mockPoint.setStringField).toHaveBeenCalledWith('appId_field', 'abc-123-def');
 expect(mockPoint.setTag).toHaveBeenCalledWith('appName', 'Sales Dashboard');

@@ -1,5 +1,5 @@
 import globals from '../../../globals.js';
-import { isInfluxDbEnabled, writeToInfluxWithRetry } from '../shared/utils.js';
+import { isInfluxDbEnabled, writeBatchToInfluxV1 } from '../shared/utils.js';

 /**
 * Posts Butler SOS memory usage metrics to InfluxDB v1.
@@ -50,12 +50,15 @@ export async function storeButlerMemoryV1(memory) {
 )}`
 );

+// Get max batch size from config
+const maxBatchSize = globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize');
+
 // Write with retry logic
-await writeToInfluxWithRetry(
-async () => await globals.influx.writePoints(datapoint),
+await writeBatchToInfluxV1(
+datapoint,
 'Memory usage metrics',
-'v1',
-'' // No specific error category for butler memory
+'INFLUXDB_V1_WRITE',
+maxBatchSize
 );

 globals.logger.verbose('MEMORY USAGE V1: Sent Butler SOS memory usage data to InfluxDB');

@@ -1,5 +1,5 @@
 import globals from '../../../globals.js';
-import { isInfluxDbEnabled, writeToInfluxWithRetry } from '../shared/utils.js';
+import { isInfluxDbEnabled, writeBatchToInfluxV1 } from '../shared/utils.js';

 /**
 * Store event count in InfluxDB v1
@@ -97,11 +97,11 @@ export async function storeEventCountV1() {
 }

 // Write with retry logic
-await writeToInfluxWithRetry(
-async () => await globals.influx.writePoints(points),
+await writeBatchToInfluxV1(
+points,
 'Event counts',
-'v1',
-''
+'',
+globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
 );

 globals.logger.verbose('EVENT COUNT V1: Sent event count data to InfluxDB');
@@ -222,11 +222,11 @@ export async function storeRejectedEventCountV1() {
 }

 // Write with retry logic
-await writeToInfluxWithRetry(
-async () => await globals.influx.writePoints(points),
+await writeBatchToInfluxV1(
+points,
 'Rejected event counts',
-'v1',
-''
+'',
+globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
 );

 globals.logger.verbose(

@@ -3,7 +3,7 @@ import {
 getFormattedTime,
 processAppDocuments,
 isInfluxDbEnabled,
-writeToInfluxWithRetry,
+writeBatchToInfluxV1,
 } from '../shared/utils.js';

 /**
@@ -185,11 +185,11 @@ export async function storeHealthMetricsV1(serverTags, body) {
 ];

 // Write to InfluxDB v1 using node-influx library with retry logic
-await writeToInfluxWithRetry(
-async () => await globals.influx.writePoints(datapoint),
+await writeBatchToInfluxV1(
+datapoint,
 `Health metrics for ${serverTags.server_name}`,
-'v1',
-serverTags.server_name
+serverTags.server_name,
+globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
 );

 globals.logger.verbose(

@@ -1,5 +1,5 @@
 import globals from '../../../globals.js';
-import { isInfluxDbEnabled, writeToInfluxWithRetry } from '../shared/utils.js';
+import { isInfluxDbEnabled, writeBatchToInfluxV1 } from '../shared/utils.js';

 /**
 * Post log event to InfluxDB v1
@@ -217,11 +217,11 @@ export async function storeLogEventV1(msg) {
 );

 // Write with retry logic
-await writeToInfluxWithRetry(
-async () => await globals.influx.writePoints(datapoint),
+await writeBatchToInfluxV1(
+datapoint,
 `Log event from ${msg.source}`,
-'v1',
-msg.host
+msg.host,
+globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
 );

 globals.logger.verbose('LOG EVENT V1: Sent log event data to InfluxDB');

@@ -1,5 +1,5 @@
 import globals from '../../../globals.js';
-import { isInfluxDbEnabled, writeToInfluxWithRetry } from '../shared/utils.js';
+import { isInfluxDbEnabled, writeBatchToInfluxV1 } from '../shared/utils.js';

 /**
 * Store user event queue metrics to InfluxDB v1
@@ -79,11 +79,11 @@ export async function storeUserEventQueueMetricsV1() {
 }

 // Write with retry logic
-await writeToInfluxWithRetry(
-async () => await globals.influx.writePoints([point]),
+await writeBatchToInfluxV1(
+[point],
 'User event queue metrics',
-'v1',
-''
+'',
+globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
 );

 globals.logger.verbose('USER EVENT QUEUE METRICS V1: Sent queue metrics data to InfluxDB');
@@ -174,11 +174,11 @@ export async function storeLogEventQueueMetricsV1() {
 }

 // Write with retry logic
-await writeToInfluxWithRetry(
-async () => await globals.influx.writePoints([point]),
+await writeBatchToInfluxV1(
+[point],
 'Log event queue metrics',
-'v1',
-''
+'',
+globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
 );

 globals.logger.verbose('LOG EVENT QUEUE METRICS V1: Sent queue metrics data to InfluxDB');

@@ -1,5 +1,5 @@
 import globals from '../../../globals.js';
-import { isInfluxDbEnabled, writeToInfluxWithRetry } from '../shared/utils.js';
+import { isInfluxDbEnabled, writeBatchToInfluxV1 } from '../shared/utils.js';

 /**
 * Posts proxy sessions data to InfluxDB v1.
@@ -48,11 +48,11 @@ export async function storeSessionsV1(userSessions) {

 // Data points are already in InfluxDB v1 format (plain objects)
 // Write array of measurements with retry logic
-await writeToInfluxWithRetry(
-async () => await globals.influx.writePoints(userSessions.datapointInfluxdb),
+await writeBatchToInfluxV1(
+userSessions.datapointInfluxdb,
 `Proxy sessions for ${userSessions.host}/${userSessions.virtualProxy}`,
-'v1',
-userSessions.serverName
+userSessions.serverName,
+globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
 );

 globals.logger.debug(

@@ -1,5 +1,5 @@
 import globals from '../../../globals.js';
-import { isInfluxDbEnabled, writeToInfluxWithRetry } from '../shared/utils.js';
+import { isInfluxDbEnabled, writeBatchToInfluxV1 } from '../shared/utils.js';

 /**
 * Posts a user event to InfluxDB v1.
@@ -88,11 +88,11 @@ export async function storeUserEventV1(msg) {
 );

 // Write with retry logic
-await writeToInfluxWithRetry(
-async () => await globals.influx.writePoints(datapoint),
+await writeBatchToInfluxV1(
+datapoint,
 'User event',
-'v1',
-msg.host
+msg.host,
+globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
 );

 globals.logger.verbose('USER EVENT V1: Sent user event data to InfluxDB');

@@ -1,6 +1,6 @@
 import { Point } from '@influxdata/influxdb-client';
 import globals from '../../../globals.js';
-import { isInfluxDbEnabled, writeToInfluxWithRetry } from '../shared/utils.js';
+import { isInfluxDbEnabled, writeBatchToInfluxV2 } from '../shared/utils.js';

 /**
 * Posts Butler SOS memory usage metrics to InfluxDB v2.
@@ -52,27 +52,13 @@ export async function storeButlerMemoryV2(memory) {
 );

 // Write to InfluxDB with retry logic
-await writeToInfluxWithRetry(
-async () => {
-const writeApi = globals.influx.getWriteApi(org, bucketName, 'ns', {
-flushInterval: 5000,
-maxRetries: 0,
-});
-try {
-await writeApi.writePoint(point);
-await writeApi.close();
-} catch (err) {
-try {
-await writeApi.close();
-} catch (closeErr) {
-// Ignore close errors
-}
-throw err;
-}
-},
+await writeBatchToInfluxV2(
+[point],
+org,
+bucketName,
 'Memory usage metrics',
-'v2',
-''
+'',
+globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
 );

 globals.logger.verbose('MEMORY USAGE V2: Sent Butler SOS memory usage data to InfluxDB');

@@ -1,6 +1,6 @@
 import { Point } from '@influxdata/influxdb-client';
 import globals from '../../../globals.js';
-import { isInfluxDbEnabled, writeToInfluxWithRetry } from '../shared/utils.js';
+import { isInfluxDbEnabled, writeBatchToInfluxV2 } from '../shared/utils.js';
 import { applyInfluxTags } from './utils.js';

 /**
@@ -75,27 +75,13 @@ export async function storeEventCountV2() {
 }

 // Write to InfluxDB with retry logic
-await writeToInfluxWithRetry(
-async () => {
-const writeApi = globals.influx.getWriteApi(org, bucketName, 'ns', {
-flushInterval: 5000,
-maxRetries: 0,
-});
-try {
-await writeApi.writePoints(points);
-await writeApi.close();
-} catch (err) {
-try {
-await writeApi.close();
-} catch (closeErr) {
-// Ignore close errors
-}
-throw err;
-}
-},
+await writeBatchToInfluxV2(
+points,
+org,
+bucketName,
 'Event count metrics',
-'v2',
-''
+'',
+globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
 );

 globals.logger.verbose('EVENT COUNT V2: Sent event count data to InfluxDB');
@@ -179,27 +165,13 @@ export async function storeRejectedEventCountV2() {
 }

 // Write to InfluxDB with retry logic
-await writeToInfluxWithRetry(
-async () => {
-const writeApi = globals.influx.getWriteApi(org, bucketName, 'ns', {
-flushInterval: 5000,
-maxRetries: 0,
-});
-try {
-await writeApi.writePoints(points);
-await writeApi.close();
-} catch (err) {
-try {
-await writeApi.close();
-} catch (closeErr) {
-// Ignore close errors
-}
-throw err;
-}
-},
+await writeBatchToInfluxV2(
+points,
+org,
+bucketName,
 'Rejected event count metrics',
-'v2',
-''
+'',
+globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
 );

 globals.logger.verbose('REJECTED EVENT COUNT V2: Sent rejected event count data to InfluxDB');

@@ -1,6 +1,6 @@
 import { Point } from '@influxdata/influxdb-client';
 import globals from '../../../globals.js';
-import { isInfluxDbEnabled, writeToInfluxWithRetry } from '../shared/utils.js';
+import { isInfluxDbEnabled, writeBatchToInfluxV2 } from '../shared/utils.js';
 import { applyInfluxTags } from './utils.js';

 /**
@@ -216,27 +216,13 @@ export async function storeLogEventV2(msg) {
 globals.logger.silly(`LOG EVENT V2: Influxdb datapoint: ${JSON.stringify(point, null, 2)}`);

 // Write to InfluxDB with retry logic
-await writeToInfluxWithRetry(
-async () => {
-const writeApi = globals.influx.getWriteApi(org, bucketName, 'ns', {
-flushInterval: 5000,
-maxRetries: 0,
-});
-try {
-await writeApi.writePoint(point);
-await writeApi.close();
-} catch (err) {
-try {
-await writeApi.close();
-} catch (closeErr) {
-// Ignore close errors
-}
-throw err;
-}
-},
+await writeBatchToInfluxV2(
+[point],
+org,
+bucketName,
 `Log event for ${msg.host}`,
-'v2',
-msg.host
+msg.host,
+globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
 );

 globals.logger.verbose('LOG EVENT V2: Sent log event data to InfluxDB');

@@ -1,6 +1,6 @@
 import { Point } from '@influxdata/influxdb-client';
 import globals from '../../../globals.js';
-import { isInfluxDbEnabled, writeToInfluxWithRetry } from '../shared/utils.js';
+import { isInfluxDbEnabled, writeBatchToInfluxV2 } from '../shared/utils.js';
 import { applyInfluxTags } from './utils.js';

 /**
@@ -74,27 +74,13 @@ export async function storeUserEventQueueMetricsV2() {
 applyInfluxTags(point, configTags);

 // Write to InfluxDB with retry logic
-await writeToInfluxWithRetry(
-async () => {
-const writeApi = globals.influx.getWriteApi(org, bucketName, 'ns', {
-flushInterval: 5000,
-maxRetries: 0,
-});
-try {
-await writeApi.writePoint(point);
-await writeApi.close();
-} catch (err) {
-try {
-await writeApi.close();
-} catch (closeErr) {
-// Ignore close errors
-}
-throw err;
-}
-},
+await writeBatchToInfluxV2(
+[point],
+org,
+bucketName,
 'User event queue metrics',
-'v2',
-'user-events-queue'
+'user-events-queue',
+globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
 );

 globals.logger.verbose('USER EVENT QUEUE METRICS V2: Sent queue metrics data to InfluxDB');
@@ -174,27 +160,13 @@ export async function storeLogEventQueueMetricsV2() {
 applyInfluxTags(point, configTags);

 // Write to InfluxDB with retry logic
-await writeToInfluxWithRetry(
-async () => {
-const writeApi = globals.influx.getWriteApi(org, bucketName, 'ns', {
-flushInterval: 5000,
-maxRetries: 0,
-});
-try {
-await writeApi.writePoint(point);
-await writeApi.close();
-} catch (err) {
-try {
-await writeApi.close();
-} catch (closeErr) {
-// Ignore close errors
-}
-throw err;
-}
-},
+await writeBatchToInfluxV2(
+[point],
+org,
+bucketName,
 'Log event queue metrics',
-'v2',
-'log-events-queue'
+'log-events-queue',
+globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
 );

 globals.logger.verbose('LOG EVENT QUEUE METRICS V2: Sent queue metrics data to InfluxDB');

@@ -1,6 +1,6 @@
 import { Point as Point3 } from '@influxdata/influxdb3-client';
 import globals from '../../../globals.js';
-import { isInfluxDbEnabled, writeToInfluxWithRetry } from '../shared/utils.js';
+import { isInfluxDbEnabled, writeBatchToInfluxV3 } from '../shared/utils.js';

 /**
 * Posts Butler SOS memory usage metrics to InfluxDB v3.
@@ -48,11 +48,12 @@ export async function postButlerSOSMemoryUsageToInfluxdbV3(memory) {

 try {
 // Convert point to line protocol and write directly with retry logic
-await writeToInfluxWithRetry(
-async () => await globals.influx.write(point.toLineProtocol(), database),
+await writeBatchToInfluxV3(
+[point],
+database,
 'Memory usage metrics',
-'v3',
-'' // No specific server context for Butler memory
+'butler-memory',
+globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
 );
 globals.logger.debug(`MEMORY USAGE V3: Wrote data to InfluxDB v3`);
 } catch (err) {

@@ -1,6 +1,6 @@
 import { Point as Point3 } from '@influxdata/influxdb3-client';
 import globals from '../../../globals.js';
-import { isInfluxDbEnabled, writeToInfluxWithRetry } from '../shared/utils.js';
+import { isInfluxDbEnabled, writeBatchToInfluxV3 } from '../shared/utils.js';

 /**
  * Store event count in InfluxDB v3
@@ -40,6 +40,8 @@ export async function storeEventCountInfluxDBV3() {
     const database = globals.config.get('Butler-SOS.influxdbConfig.v3Config.database');

     try {
+        const points = [];
+
         // Store data for each log event
         for (const logEvent of logEvents) {
             const tags = {
@@ -80,13 +82,7 @@ export async function storeEventCountInfluxDBV3() {
                 point.setTag(key, tags[key]);
             });

-            await writeToInfluxWithRetry(
-                async () => await globals.influx.write(point.toLineProtocol(), database),
-                'Log event counts',
-                'v3',
-                'log-events'
-            );
-            globals.logger.debug(`EVENT COUNT INFLUXDB V3: Wrote log event data to InfluxDB v3`);
+            points.push(point);
         }

         // Loop through data in user events and create datapoints
@@ -129,15 +125,19 @@ export async function storeEventCountInfluxDBV3() {
                 point.setTag(key, tags[key]);
             });

-            await writeToInfluxWithRetry(
-                async () => await globals.influx.write(point.toLineProtocol(), database),
-                'User event counts',
-                'v3',
-                'user-events'
-            );
-            globals.logger.debug(`EVENT COUNT INFLUXDB V3: Wrote user event data to InfluxDB v3`);
+            points.push(point);
         }

+        await writeBatchToInfluxV3(
+            points,
+            database,
+            'Event counts',
+            'event-counts',
+            globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
+        );
+
+        globals.logger.debug(`EVENT COUNT INFLUXDB V3: Wrote event data to InfluxDB v3`);
+
         globals.logger.verbose(
             'EVENT COUNT INFLUXDB V3: Sent Butler SOS event count data to InfluxDB'
         );
@@ -244,14 +244,13 @@ export async function storeRejectedEventCountInfluxDBV3() {
         });

         // Write to InfluxDB
-        for (const point of points) {
-            await writeToInfluxWithRetry(
-                async () => await globals.influx.write(point.toLineProtocol(), database),
-                'Rejected event counts',
-                'v3',
-                'rejected-events'
-            );
-        }
+        await writeBatchToInfluxV3(
+            points,
+            database,
+            'Rejected event counts',
+            'rejected-event-counts',
+            globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
+        );

         globals.logger.debug(`REJECT LOG EVENT INFLUXDB V3: Wrote data to InfluxDB v3`);

         globals.logger.verbose(
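The hunks above stop calling `writeToInfluxWithRetry` once per point and instead collect points into an array that is flushed through `writeBatchToInfluxV3(points, database, description, context, maxBatchSize)`. The helper itself lives in `shared/utils.js` and is not part of this diff; below is a minimal sketch of the chunking such a helper presumably performs — the function names and the line-protocol join are assumptions, not the actual Butler SOS implementation:

```javascript
// Split an array of points into chunks of at most maxBatchSize entries
function chunkPoints(points, maxBatchSize) {
    const batches = [];
    for (let i = 0; i < points.length; i += maxBatchSize) {
        batches.push(points.slice(i, i + maxBatchSize));
    }
    return batches;
}

// Hypothetical batch writer: each chunk becomes one line-protocol payload
// handed to a single write call, instead of one write call per point
async function writeBatchSketch(points, writeLines, maxBatchSize) {
    for (const batch of chunkPoints(points, maxBatchSize)) {
        await writeLines(batch.map((p) => p.toLineProtocol()).join('\n'));
    }
}
```

Writing a batch per chunk rather than a write per point cuts the number of round trips to InfluxDB from `points.length` to `ceil(points.length / maxBatchSize)`.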
@@ -5,7 +5,7 @@ import {
     processAppDocuments,
     isInfluxDbEnabled,
     applyTagsToPoint3,
-    writeToInfluxWithRetry,
+    writeBatchToInfluxV3,
     validateUnsignedField,
 } from '../shared/utils.js';

@@ -239,13 +239,16 @@ export async function postHealthMetricsToInfluxdbV3(serverName, host, body, serv
         for (const point of points) {
             // Apply server tags to each point
             applyTagsToPoint3(point, serverTags);
-            await writeToInfluxWithRetry(
-                async () => await globals.influx.write(point.toLineProtocol(), database),
-                `Health metrics for ${host}`,
-                'v3',
-                serverName
-            );
         }

+        await writeBatchToInfluxV3(
+            points,
+            database,
+            `Health metrics for ${host}`,
+            'health-metrics',
+            globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
+        );
+
         globals.logger.debug(`HEALTH METRICS V3: Wrote data to InfluxDB v3`);
     } catch (err) {
         // Track error count
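The removed `writeToInfluxWithRetry` calls wrapped each individual write in retry logic; whether `writeBatchToInfluxV3` retries failed batches internally is not visible in this diff. For reference, a generic wrapper in that spirit could look as follows — the signature, attempt count, and backoff policy are assumptions, not Butler SOS's actual implementation:

```javascript
// Retry an async write a few times with exponential backoff before giving up.
// Hypothetical sketch, not the helper removed in this commit.
async function writeWithRetry(writeFn, description, maxRetries = 3) {
    let lastErr;
    for (let attempt = 1; attempt <= maxRetries; attempt++) {
        try {
            return await writeFn();
        } catch (err) {
            lastErr = err;
            // Back off 100 ms, 200 ms, 400 ms, ... between attempts
            await new Promise((resolve) => setTimeout(resolve, 100 * 2 ** (attempt - 1)));
        }
    }
    throw new Error(`${description}: all ${maxRetries} attempts failed: ${lastErr}`);
}
```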
@@ -1,6 +1,6 @@
 import { Point as Point3 } from '@influxdata/influxdb3-client';
 import globals from '../../../globals.js';
-import { isInfluxDbEnabled, writeToInfluxWithRetry } from '../shared/utils.js';
+import { isInfluxDbEnabled, writeBatchToInfluxV3 } from '../shared/utils.js';

 /**
  * Clean tag values for InfluxDB v3 line protocol
@@ -295,11 +295,12 @@ export async function postLogEventToInfluxdbV3(msg) {
             }
         }

-        await writeToInfluxWithRetry(
-            async () => await globals.influx.write(point.toLineProtocol(), database),
-            `Log event for ${msg.host}`,
-            'v3',
-            msg.host
-        );
+        await writeBatchToInfluxV3(
+            [point],
+            database,
+            `Log event for ${msg.host}`,
+            'log-events',
+            globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
+        );

         globals.logger.debug(`LOG EVENT INFLUXDB V3: Wrote data to InfluxDB v3`);
@@ -1,6 +1,6 @@
 import { Point as Point3 } from '@influxdata/influxdb3-client';
 import globals from '../../../globals.js';
-import { isInfluxDbEnabled, writeToInfluxWithRetry } from '../shared/utils.js';
+import { isInfluxDbEnabled, writeBatchToInfluxV3 } from '../shared/utils.js';

 /**
  * Store user event queue metrics to InfluxDB v3
@@ -77,11 +77,12 @@ export async function postUserEventQueueMetricsToInfluxdbV3() {
             }
         }

-        await writeToInfluxWithRetry(
-            async () => await globals.influx.write(point.toLineProtocol(), database),
-            'User event queue metrics',
-            'v3',
-            'user-events-queue'
-        );
+        await writeBatchToInfluxV3(
+            [point],
+            database,
+            'User event queue metrics',
+            'user-events-queue',
+            globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
+        );

         globals.logger.verbose(
@@ -171,11 +172,12 @@ export async function postLogEventQueueMetricsToInfluxdbV3() {
             }
         }

-        await writeToInfluxWithRetry(
-            async () => await globals.influx.write(point.toLineProtocol(), database),
-            'Log event queue metrics',
-            'v3',
-            'log-events-queue'
-        );
+        await writeBatchToInfluxV3(
+            [point],
+            database,
+            'Log event queue metrics',
+            'log-events-queue',
+            globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
+        );

         globals.logger.verbose(
@@ -1,6 +1,6 @@
 import { Point as Point3 } from '@influxdata/influxdb3-client';
 import globals from '../../../globals.js';
-import { isInfluxDbEnabled, writeToInfluxWithRetry } from '../shared/utils.js';
+import { isInfluxDbEnabled, writeBatchToInfluxV3 } from '../shared/utils.js';

 /**
  * Posts proxy sessions data to InfluxDB v3.
@@ -39,14 +39,14 @@ export async function postProxySessionsToInfluxdbV3(userSessions) {
     // The datapointInfluxdb array contains summary points and individual session details
     try {
         if (userSessions.datapointInfluxdb && userSessions.datapointInfluxdb.length > 0) {
-            for (const point of userSessions.datapointInfluxdb) {
-                await writeToInfluxWithRetry(
-                    async () => await globals.influx.write(point.toLineProtocol(), database),
-                    `Proxy sessions for ${userSessions.host}/${userSessions.virtualProxy}`,
-                    'v3',
-                    userSessions.host
-                );
-            }
+            await writeBatchToInfluxV3(
+                userSessions.datapointInfluxdb,
+                database,
+                `Proxy sessions for ${userSessions.host}/${userSessions.virtualProxy}`,
+                userSessions.host,
+                globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
+            );

             globals.logger.debug(
                 `PROXY SESSIONS V3: Wrote ${userSessions.datapointInfluxdb.length} datapoints to InfluxDB v3`
             );
@@ -1,6 +1,6 @@
 import { Point as Point3 } from '@influxdata/influxdb3-client';
 import globals from '../../../globals.js';
-import { isInfluxDbEnabled, writeToInfluxWithRetry } from '../shared/utils.js';
+import { isInfluxDbEnabled, writeBatchToInfluxV3 } from '../shared/utils.js';

 /**
  * Sanitize tag values for InfluxDB line protocol.
@@ -100,11 +100,12 @@ export async function postUserEventToInfluxdbV3(msg) {
     // Write to InfluxDB
     try {
         // Convert point to line protocol and write directly with retry logic
-        await writeToInfluxWithRetry(
-            async () => await globals.influx.write(point.toLineProtocol(), database),
-            `User event for ${msg.host}`,
-            'v3',
-            msg.host
-        );
+        await writeBatchToInfluxV3(
+            [point],
+            database,
+            `User event for ${msg.host}`,
+            msg.host,
+            globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
+        );
         globals.logger.debug(`USER EVENT INFLUXDB V3: Wrote data to InfluxDB v3`);
     } catch (err) {