fix(influxdb)!: Move InfluxDB maxBatchSize config setting to sit right under influxdbConfig section in YAML config file

Göran Sander
2025-12-17 22:30:36 +01:00
parent 1c3f3a336a
commit 2bdb21629b
43 changed files with 526 additions and 886 deletions

View File

@@ -1,414 +0,0 @@
# Butler SOS Insider Build Automatic Deployment Setup
This document describes the setup required to enable automatic deployment of Butler SOS insider builds to the testing server.
## Overview
The GitHub Actions workflow `insiders-build.yaml` now includes automatic deployment of Windows insider builds to the `host2-win` server. After a successful build, the deployment job will:
1. Download the Windows installer build artifact
2. Stop the "Butler SOS insiders build" Windows service
3. Replace the binary with the new version
4. Start the service again
5. Verify the deployment was successful
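As a rough illustration, the five steps above can be expressed as a command sequence (a hypothetical Node.js sketch; the real workflow implements this in PowerShell, and the `sc`/`copy` invocations and `buildDeploySteps` helper are illustrative only):

```javascript
// Hypothetical sketch: build the command sequence for one deployment run.
// Service name and paths are the defaults documented below; the actual
// insiders-build.yaml workflow does this in PowerShell.
function buildDeploySteps(serviceName, deployPath, artifactPath) {
    return [
        `sc stop "${serviceName}"`, // step 2: stop the service
        `copy /Y "${artifactPath}" "${deployPath}\\butler-sos.exe"`, // step 3: replace the binary
        `sc start "${serviceName}"`, // step 4: start the service again
        `sc query "${serviceName}"`, // step 5: verify it is running
    ];
}

const steps = buildDeploySteps(
    'Butler SOS insiders build',
    'C:\\butler-sos-insider',
    './download/butler-sos.exe'
);
console.log(steps.join('\n'));
```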
## Manual Setup Required
### 1. GitHub Variables Configuration (Optional)
The deployment workflow supports configurable properties via GitHub repository variables. All have sensible defaults, so configuration is optional:
| Variable Name | Description | Default Value |
| ------------------------------------ | ---------------------------------------------------- | --------------------------- |
| `BUTLER_SOS_INSIDER_DEPLOY_RUNNER` | GitHub runner name/label to use for deployment | `host2-win` |
| `BUTLER_SOS_INSIDER_SERVICE_NAME` | Windows service name for Butler SOS | `Butler SOS insiders build` |
| `BUTLER_SOS_INSIDER_DEPLOY_PATH`    | Directory path where the binary is deployed          | `C:\butler-sos-insider`     |
| `BUTLER_SOS_INSIDER_SERVICE_TIMEOUT` | Timeout in seconds for service stop/start operations | `30` |
| `BUTLER_SOS_INSIDER_DOWNLOAD_PATH` | Temporary download path for artifacts | `./download` |
**To configure GitHub variables:**
1. Go to your repository → Settings → Secrets and variables → Actions
2. Click on the "Variables" tab
3. Click "New repository variable"
4. Add any of the above variable names with your desired values
5. The workflow will automatically use these values, falling back to defaults if not set
**Example customization:**
```yaml
# Set custom runner name
BUTLER_SOS_INSIDER_DEPLOY_RUNNER: "my-custom-runner"
# Use different service name
BUTLER_SOS_INSIDER_SERVICE_NAME: "Butler SOS Testing Service"
# Deploy to different directory
BUTLER_SOS_INSIDER_DEPLOY_PATH: 'D:\Apps\butler-sos-test' # Single quotes keep backslashes literal in YAML
# Increase timeout for slower systems
BUTLER_SOS_INSIDER_SERVICE_TIMEOUT: "60"
```
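The fallback behavior described above can be sketched as follows (hypothetical JavaScript; in the real workflow this resolution happens via the GitHub Actions `vars` context, e.g. `${{ vars.X || 'default' }}`):

```javascript
// Hypothetical sketch of resolving a repository variable with a fallback,
// mirroring the defaults table above.
const defaults = {
    BUTLER_SOS_INSIDER_DEPLOY_RUNNER: 'host2-win',
    BUTLER_SOS_INSIDER_SERVICE_NAME: 'Butler SOS insiders build',
    BUTLER_SOS_INSIDER_DEPLOY_PATH: 'C:\\butler-sos-insider',
    BUTLER_SOS_INSIDER_SERVICE_TIMEOUT: '30',
    BUTLER_SOS_INSIDER_DOWNLOAD_PATH: './download',
};

function resolveVariable(name, vars = {}) {
    // An unset or empty repository variable falls back to the default
    return vars[name] ? vars[name] : defaults[name];
}

console.log(resolveVariable('BUTLER_SOS_INSIDER_SERVICE_TIMEOUT', { BUTLER_SOS_INSIDER_SERVICE_TIMEOUT: '60' }));
console.log(resolveVariable('BUTLER_SOS_INSIDER_DEPLOY_RUNNER'));
```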
### 2. GitHub Runner Configuration
On the deployment server (default: `host2-win`, configurable via `BUTLER_SOS_INSIDER_DEPLOY_RUNNER` variable), ensure the GitHub runner is configured with:
**Runner Labels:**
- The runner must be labeled to match the `BUTLER_SOS_INSIDER_DEPLOY_RUNNER` variable value (default: `host2-win`)
**Permissions:**
- The runner service account must have permission to:
- Stop and start Windows services
- Write to the deployment directory (default: `C:\butler-sos-insider`, configurable via `BUTLER_SOS_INSIDER_DEPLOY_PATH`)
- Execute PowerShell scripts
**PowerShell Execution Policy:**
```powershell
# Run as Administrator
Set-ExecutionPolicy RemoteSigned -Scope LocalMachine
```
### 3. Windows Service Setup
Create a Windows service. The service name and deployment path can be customized via GitHub repository variables (see section 1).
**Default values:**
- Service Name: `"Butler SOS insiders build"` (configurable via `BUTLER_SOS_INSIDER_SERVICE_NAME`)
- Deploy Path: `C:\butler-sos-insider` (configurable via `BUTLER_SOS_INSIDER_DEPLOY_PATH`)
**Option A: Using NSSM (Non-Sucking Service Manager) - Recommended**
NSSM is a popular tool for creating Windows services from executables and provides better service management capabilities.
First, download and install NSSM:
1. Download NSSM from https://nssm.cc/download
2. Extract to a location like `C:\nssm`
3. Add `C:\nssm\win64` (or `win32`) to your system PATH
```cmd
REM Run as Administrator
REM Install the service
nssm install "Butler SOS insiders build" "C:\butler-sos-insider\butler-sos.exe"
REM Set service parameters
nssm set "Butler SOS insiders build" AppParameters "--config C:\butler-sos-insider\config\production_template.yaml"
nssm set "Butler SOS insiders build" AppDirectory "C:\butler-sos-insider"
nssm set "Butler SOS insiders build" DisplayName "Butler SOS insiders build"
nssm set "Butler SOS insiders build" Description "Butler SOS insider build for testing"
nssm set "Butler SOS insiders build" Start SERVICE_DEMAND_START
REM Optional: Set up logging
nssm set "Butler SOS insiders build" AppStdout "C:\butler-sos-insider\logs\stdout.log"
nssm set "Butler SOS insiders build" AppStderr "C:\butler-sos-insider\logs\stderr.log"
REM Optional: Set service account (default is Local System)
REM nssm set "Butler SOS insiders build" ObjectName ".\ServiceAccount" "password"
```
**NSSM Service Management Commands:**
```cmd
REM Start the service
nssm start "Butler SOS insiders build"
REM Stop the service
nssm stop "Butler SOS insiders build"
REM Restart the service
nssm restart "Butler SOS insiders build"
REM Check service status
nssm status "Butler SOS insiders build"
REM Remove the service (if needed)
nssm remove "Butler SOS insiders build" confirm
REM Edit service configuration
nssm edit "Butler SOS insiders build"
```
**Using NSSM with PowerShell:**
```powershell
# Run as Administrator
$serviceName = "Butler SOS insiders build"
$exePath = "C:\butler-sos-insider\butler-sos.exe"
$configPath = "C:\butler-sos-insider\config\production_template.yaml"
# Install service
& nssm install $serviceName $exePath
& nssm set $serviceName AppParameters "--config $configPath"
& nssm set $serviceName AppDirectory "C:\butler-sos-insider"
& nssm set $serviceName DisplayName $serviceName
& nssm set $serviceName Description "Butler SOS insider build for testing"
& nssm set $serviceName Start SERVICE_DEMAND_START
# Create logs directory
New-Item -ItemType Directory -Path "C:\butler-sos-insider\logs" -Force
# Set up logging
& nssm set $serviceName AppStdout "C:\butler-sos-insider\logs\stdout.log"
& nssm set $serviceName AppStderr "C:\butler-sos-insider\logs\stderr.log"
Write-Host "Service '$serviceName' installed successfully with NSSM"
```
**Option B: Using PowerShell**
```powershell
# Run as Administrator
$serviceName = "Butler SOS insiders build"
$exePath = "C:\butler-sos-insider\butler-sos.exe"
$configPath = "C:\butler-sos-insider\config\production_template.yaml"
# Create the service
New-Service -Name $serviceName -BinaryPathName "$exePath --config $configPath" -DisplayName $serviceName -Description "Butler SOS insider build for testing" -StartupType Manual
# Set service to run as Local System or specify custom account
# For custom account:
# $credential = Get-Credential
# $service = Get-WmiObject -Class Win32_Service -Filter "Name='$serviceName'"
# $service.Change($null,$null,$null,$null,$null,$null,$credential.UserName,$credential.GetNetworkCredential().Password)
```
**Option C: Using SC command**
```cmd
REM Run as Administrator
sc create "Butler SOS insiders build" binPath= "C:\butler-sos-insider\butler-sos.exe --config C:\butler-sos-insider\config\production_template.yaml" DisplayName= "Butler SOS insiders build" start= demand
```
**Option D: Using Windows Service Manager (services.msc)**
1. Open Services management console
2. Right-click and select "Create Service"
3. Fill in the details:
- Service Name: `Butler SOS insiders build`
- Display Name: `Butler SOS insiders build`
- Path to executable: `C:\butler-sos-insider\butler-sos.exe`
- Startup Type: Manual or Automatic as preferred
### 4. Directory Setup
Create the deployment directory with proper permissions:
```powershell
# Run as Administrator
$deployPath = "C:\butler-sos-insider"
$runnerUser = "NT SERVICE\github-runner" # Adjust based on your runner service account
# Create directory
New-Item -ItemType Directory -Path $deployPath -Force
# Grant permissions to the runner service account
$acl = Get-Acl $deployPath
$accessRule = New-Object System.Security.AccessControl.FileSystemAccessRule($runnerUser, "FullControl", "ContainerInherit,ObjectInherit", "None", "Allow")
$acl.SetAccessRule($accessRule)
Set-Acl -Path $deployPath -AclObject $acl
Write-Host "Directory created and permissions set for: $deployPath"
```
### 5. Service Permissions
Grant the GitHub runner service account permission to manage the Butler SOS service:
```powershell
# Run as Administrator
# Download and use the SubInACL tool or use PowerShell with .NET classes
# Option A: Using PowerShell (requires additional setup)
$serviceName = "Butler SOS insiders build"
$runnerUser = "NT SERVICE\github-runner" # Adjust based on your runner service account
# This is a simplified example - you may need more advanced permission management
# depending on your security requirements
Write-Host "Service permissions need to be configured manually using Group Policy or SubInACL"
Write-Host "Grant '$runnerUser' the following rights:"
Write-Host "- Log on as a service"
Write-Host "- Start and stop services"
Write-Host "- Manage service permissions for '$serviceName'"
```
## Testing the Deployment
### Manual Test
To manually test the deployment process:
1. Trigger the insider build workflow in GitHub Actions
2. Monitor the workflow logs for the `deploy-windows-insider` job
3. Check that the service stops and starts properly
4. Verify the new binary is deployed to `C:\butler-sos-insider`
### Troubleshooting
**Common Issues:**
1. **Service not found:**
- Ensure the service name is exactly `"Butler SOS insiders build"`
- Check that the service was created successfully
- If using NSSM: `nssm status "Butler SOS insiders build"`
2. **Permission denied:**
- Verify the GitHub runner has service management permissions
- Check directory permissions for `C:\butler-sos-insider`
- If using NSSM: Ensure NSSM is in system PATH and accessible to the runner account
3. **Service won't start:**
- Check the service configuration and binary path
- Review Windows Event Logs for service startup errors
- Ensure the configuration file is present and valid
- **If using NSSM:**
- Check service configuration: `nssm get "Butler SOS insiders build" AppDirectory`
- Check parameters: `nssm get "Butler SOS insiders build" AppParameters`
- Review NSSM logs in `C:\butler-sos-insider\logs\` (if configured)
- Use `nssm edit "Butler SOS insiders build"` to open the GUI editor
4. **GitHub Runner not found:**
- Verify the runner is labeled as `host2-win`
- Ensure the runner is online and accepting jobs
5. **NSSM-specific issues:**
- **NSSM not found:** Ensure NSSM is installed and in system PATH
- **Service already exists:** Use `nssm remove "Butler SOS insiders build" confirm` to remove and recreate
- **Wrong parameters:** Use `nssm set "Butler SOS insiders build" AppParameters "new-parameters"`
- **Logging issues:** Verify the logs directory exists and has write permissions
**NSSM Diagnostic Commands:**
```cmd
REM Check if NSSM is available
nssm version
REM Get all service parameters
nssm dump "Butler SOS insiders build"
REM Check specific configuration
nssm get "Butler SOS insiders build" Application
nssm get "Butler SOS insiders build" AppDirectory
nssm get "Butler SOS insiders build" AppParameters
nssm get "Butler SOS insiders build" Start
REM View service status
nssm status "Butler SOS insiders build"
```
**Log Locations:**
- GitHub Actions logs: Available in the workflow run details
- Windows Event Logs: Check System and Application logs
- Service logs: Check Butler SOS application logs if configured
- **NSSM logs** (if using NSSM with logging enabled):
- stdout: `C:\butler-sos-insider\logs\stdout.log`
- stderr: `C:\butler-sos-insider\logs\stderr.log`
## Configuration Files
The deployment includes the configuration template and log appender files in the zip package:
- `config/production_template.yaml` - Main configuration template
- `config/log_appender_xml/` - Log4j configuration files
Adjust the service's startup parameters (the `--config` argument) to point to your actual configuration file location if different from the template.
## Security Considerations
- The deployment uses PowerShell scripts with `continue-on-error: true` to prevent workflow failures
- Service management requires elevated permissions - ensure the GitHub runner runs with appropriate privileges
- Consider using a dedicated service account rather than Local System for better security
- Monitor deployment logs for any security-related issues
## Support
If you encounter issues with the automatic deployment:
1. Check the GitHub Actions workflow logs for detailed error messages
2. Verify the manual setup steps were completed correctly
3. Test service operations manually before relying on automation
4. Consider running a test deployment on a non-production system first

View File

@@ -506,28 +506,26 @@ Butler-SOS:
host: influxdb.mycompany.com # InfluxDB host, hostname, FQDN or IP address
port: 8086 # Port where InfluxDB is listening, usually 8086
version: 2 # Is the InfluxDB instance version 1.x, 2.x or 3.x? Valid values are 1, 2, or 3
maxBatchSize: 1000 # Maximum number of data points to write in a single batch. If a batch fails, progressive retry with smaller sizes (1000→500→250→100→10→1) will be attempted. Valid range: 1-10000.
v3Config: # Settings for InfluxDB v3.x only, i.e. Butler-SOS.influxdbConfig.version=3
database: mydatabase
description: Butler SOS metrics
token: mytoken
retentionDuration: 10d
timeout: 10000 # Optional: Socket timeout in milliseconds (default: 10000)
timeout: 10000 # Optional: Socket timeout in milliseconds (writing to InfluxDB) (default: 10000)
queryTimeout: 60000 # Optional: Query timeout in milliseconds (default: 60000)
maxBatchSize: 1000 # Maximum number of data points to write in a single batch. If a batch fails, progressive retry with smaller sizes (1000→500→250→100→10→1) will be attempted. Valid range: 1-10000.
v2Config: # Settings for InfluxDB v2.x only, i.e. Butler-SOS.influxdbConfig.version=2
org: myorg
bucket: mybucket
description: Butler SOS metrics
token: mytoken
retentionDuration: 10d
maxBatchSize: 1000 # Maximum number of data points to write in a single batch. If a batch fails, progressive retry with smaller sizes (1000→500→250→100→10→1) will be attempted. Valid range: 1-10000.
v1Config: # Settings below are for InfluxDB v1.x only, i.e. Butler-SOS.influxdbConfig.version=1
auth:
enable: false # Does influxdb instance require authentication (true/false)?
username: <username> # Username for Influxdb authentication. Mandatory if auth.enable=true
password: <password> # Password for Influxdb authentication. Mandatory if auth.enable=true
dbName: senseops
maxBatchSize: 1000 # Maximum number of data points to write in a single batch. If a batch fails, progressive retry with smaller sizes (1000→500→250→100→10→1) will be attempted. Valid range: 1-10000.
# Default retention policy that should be created in InfluxDB when Butler SOS creates a new database there.
# Any data older than retention policy threshold will be purged from InfluxDB.
retentionPolicy:
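The progressive retry mentioned in the `maxBatchSize` comments (1000→500→250→100→10→1) can be sketched like this (illustrative only, not the actual Butler SOS implementation; `writeWithProgressiveRetry` is a made-up name and the writes are shown synchronously for brevity):

```javascript
// Sketch of progressive batch-size retry: on failure, re-chunk the points
// into smaller batches and try again. The size ladder matches the comment
// in the config above; writePoints stands in for the InfluxDB client call.
function writeWithProgressiveRetry(points, writePoints, maxBatchSize = 1000) {
    const sizes = [1000, 500, 250, 100, 10, 1].filter((s) => s <= maxBatchSize);
    for (const size of sizes) {
        try {
            for (let i = 0; i < points.length; i += size) {
                writePoints(points.slice(i, i + size));
            }
            return size; // the batch size that succeeded
        } catch (err) {
            // fall through and retry with the next, smaller batch size
        }
    }
    throw new Error('All batch sizes failed');
}
```

Note that this sketch simply re-chunks everything on failure; a real implementation would have to consider points already written before the error (with InfluxDB, rewriting a point with identical timestamp and tags overwrites it, which makes this tolerable).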

View File

@@ -179,9 +179,8 @@ export async function verifyAppConfig(cfg) {
return false;
}
// Validate and set default for maxBatchSize based on version
const versionConfig = `v${influxdbVersion}Config`;
const maxBatchSizePath = `Butler-SOS.influxdbConfig.${versionConfig}.maxBatchSize`;
// Validate and set default for maxBatchSize
const maxBatchSizePath = `Butler-SOS.influxdbConfig.maxBatchSize`;
if (cfg.has(maxBatchSizePath)) {
const maxBatchSize = cfg.get(maxBatchSizePath);
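The validation that follows this read is truncated in the hunk above; a minimal sketch of such a check (assumed shape, using the 1-10000 range and 1000 default from the schema; `validateMaxBatchSize` is not the actual function name) could be:

```javascript
// Hypothetical sketch of the maxBatchSize validation: accept integers in
// the 1-10000 range, otherwise fall back to the default of 1000.
function validateMaxBatchSize(value, { min = 1, max = 10000, dflt = 1000 } = {}) {
    if (typeof value !== 'number' || !Number.isInteger(value) || value < min || value > max) {
        return dflt;
    }
    return value;
}

console.log(validateMaxBatchSize(500)); // valid, kept as-is
console.log(validateMaxBatchSize(20000)); // out of range, falls back to 1000
```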

View File

@@ -316,6 +316,14 @@ export const destinationsSchema = {
},
port: { type: 'number' },
version: { type: 'number' },
maxBatchSize: {
type: 'number',
description:
'Maximum number of data points to write in a single batch. Progressive retry with smaller sizes attempted on failure.',
default: 1000,
minimum: 1,
maximum: 10000,
},
v3Config: {
type: 'object',
properties: {
@@ -335,14 +343,6 @@ export const destinationsSchema = {
default: 60000,
minimum: 1000,
},
maxBatchSize: {
type: 'number',
description:
'Maximum number of data points to write in a single batch. Progressive retry with smaller sizes attempted on failure.',
default: 1000,
minimum: 1,
maximum: 10000,
},
},
required: ['database', 'description', 'token', 'retentionDuration'],
additionalProperties: false,
@@ -355,14 +355,6 @@ export const destinationsSchema = {
description: { type: 'string' },
token: { type: 'string' },
retentionDuration: { type: 'string' },
maxBatchSize: {
type: 'number',
description:
'Maximum number of data points to write in a single batch. Progressive retry with smaller sizes attempted on failure.',
default: 1000,
minimum: 1,
maximum: 10000,
},
},
required: ['org', 'bucket', 'description', 'token', 'retentionDuration'],
additionalProperties: false,
@@ -393,14 +385,6 @@ export const destinationsSchema = {
required: ['name', 'duration'],
additionalProperties: false,
},
maxBatchSize: {
type: 'number',
description:
'Maximum number of data points to write in a single batch. Progressive retry with smaller sizes attempted on failure.',
default: 1000,
minimum: 1,
maximum: 10000,
},
},
required: ['auth', 'dbName', 'retentionPolicy'],
additionalProperties: false,
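Because this is a breaking change (the `!` in the commit title) and the schema uses `additionalProperties: false`, existing configs that still have `maxBatchSize` under `v1Config`/`v2Config`/`v3Config` will fail validation. A hypothetical migration helper could hoist the setting to its new location (a sketch; `migrateMaxBatchSize` is not part of Butler SOS):

```javascript
// Hypothetical migration for this commit's breaking change: move
// maxBatchSize from the per-version blocks up to influxdbConfig itself.
function migrateMaxBatchSize(influxdbConfig) {
    const cfg = { ...influxdbConfig };
    for (const key of ['v1Config', 'v2Config', 'v3Config']) {
        if (cfg[key] && 'maxBatchSize' in cfg[key]) {
            // First per-version value found wins, unless already set at top level
            if (!('maxBatchSize' in cfg)) cfg.maxBatchSize = cfg[key].maxBatchSize;
            const { maxBatchSize, ...rest } = cfg[key]; // drop the old property
            cfg[key] = rest;
        }
    }
    if (!('maxBatchSize' in cfg)) cfg.maxBatchSize = 1000; // schema default
    return cfg;
}
```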

View File

@@ -31,6 +31,7 @@ jest.unstable_mockModule('../../../globals.js', () => ({
const mockUtils = {
isInfluxDbEnabled: jest.fn(),
writeToInfluxWithRetry: jest.fn(),
writeBatchToInfluxV1: jest.fn(),
};
jest.unstable_mockModule('../shared/utils.js', () => mockUtils);
@@ -50,7 +51,11 @@ describe('v1/butler-memory', () => {
// Setup default mocks
utils.isInfluxDbEnabled.mockReturnValue(true);
utils.writeToInfluxWithRetry.mockResolvedValue();
utils.writeBatchToInfluxV1.mockResolvedValue();
globals.config.get.mockImplementation((key) => {
if (key === 'Butler-SOS.influxdbConfig.maxBatchSize') return 100;
return undefined;
});
});
describe('storeButlerMemoryV1', () => {
@@ -67,7 +72,7 @@ describe('v1/butler-memory', () => {
await storeButlerMemoryV1(memory);
expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
expect(globals.logger.debug).toHaveBeenCalledWith(
expect.stringContaining('MEMORY USAGE V1')
);
@@ -84,11 +89,11 @@ describe('v1/butler-memory', () => {
await storeButlerMemoryV1(memory);
expect(utils.writeToInfluxWithRetry).toHaveBeenCalledWith(
expect.any(Function),
expect(utils.writeBatchToInfluxV1).toHaveBeenCalledWith(
expect.any(Array),
'Memory usage metrics',
'v1',
''
'INFLUXDB_V1_WRITE',
100
);
expect(globals.logger.verbose).toHaveBeenCalledWith(
'MEMORY USAGE V1: Sent Butler SOS memory usage data to InfluxDB'
@@ -104,27 +109,49 @@ describe('v1/butler-memory', () => {
processMemoryMByte: 350.5,
};
utils.writeToInfluxWithRetry.mockImplementation(async (writeFn) => {
await writeFn();
});
// writeBatchToInfluxV1 receives the datapoint array directly, so the test
// inspects the batch passed to it instead of invoking a write callback.
await storeButlerMemoryV1(memory);
expect(globals.influx.writePoints).toHaveBeenCalledWith([
{
measurement: 'butlersos_memory_usage',
tags: {
butler_sos_instance: 'test-instance',
version: '1.0.0',
},
fields: {
heap_used: 150.5,
heap_total: 300.75,
external: 75.25,
process_memory: 350.5,
},
},
]);
expect(utils.writeBatchToInfluxV1).toHaveBeenCalledWith(
expect.arrayContaining([
expect.objectContaining({
measurement: 'butlersos_memory_usage',
tags: {
butler_sos_instance: 'test-instance',
version: '1.0.0',
},
fields: {
heap_used: 150.5,
heap_total: 300.75,
external: 75.25,
process_memory: 350.5,
},
})
]),
expect.any(String),
expect.any(String),
expect.any(Number)
);
});
test('should handle write errors and rethrow', async () => {
@@ -137,7 +164,7 @@ describe('v1/butler-memory', () => {
};
const writeError = new Error('Write failed');
utils.writeToInfluxWithRetry.mockRejectedValue(writeError);
utils.writeBatchToInfluxV1.mockRejectedValue(writeError);
await expect(storeButlerMemoryV1(memory)).rejects.toThrow('Write failed');

View File

@@ -24,6 +24,7 @@ jest.unstable_mockModule('../../../globals.js', () => ({ default: mockGlobals })
const mockUtils = {
isInfluxDbEnabled: jest.fn(),
writeToInfluxWithRetry: jest.fn(),
writeBatchToInfluxV1: jest.fn(),
};
jest.unstable_mockModule('../shared/utils.js', () => mockUtils);
@@ -43,6 +44,7 @@ describe('v1/event-counts', () => {
globals.config.get.mockImplementation((path) => {
if (path.includes('measurementName')) return 'event_counts';
if (path.includes('tags')) return [{ name: 'env', value: 'prod' }];
if (path.includes('maxBatchSize')) return 100;
return undefined;
});
@@ -58,6 +60,7 @@ describe('v1/event-counts', () => {
utils.isInfluxDbEnabled.mockReturnValue(true);
utils.writeToInfluxWithRetry.mockResolvedValue();
utils.writeBatchToInfluxV1.mockResolvedValue();
});
test('should return early when no events', async () => {
@@ -70,16 +73,16 @@ describe('v1/event-counts', () => {
test('should return early when InfluxDB disabled', async () => {
utils.isInfluxDbEnabled.mockReturnValue(false);
await storeEventCountV1();
expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
});
test('should write event counts', async () => {
await storeEventCountV1();
expect(utils.writeToInfluxWithRetry).toHaveBeenCalledWith(
expect.any(Function),
expect(utils.writeBatchToInfluxV1).toHaveBeenCalledWith(
expect.any(Array),
'Event counts',
'v1',
''
'',
100
);
});
@@ -90,7 +93,7 @@ describe('v1/event-counts', () => {
]);
globals.udpEvents.getUserEvents.mockResolvedValue([]);
await storeEventCountV1();
expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
});
test('should apply config tags to user events', async () => {
@@ -100,7 +103,7 @@ describe('v1/event-counts', () => {
{ source: 'qseow-proxy', host: 'host2', subsystem: 'Session', counter: 7 },
]);
await storeEventCountV1();
expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
});
test('should handle mixed log and user events', async () => {
@@ -111,29 +114,29 @@ describe('v1/event-counts', () => {
{ source: 'qseow-proxy', host: 'host2', subsystem: 'User', counter: 3 },
]);
await storeEventCountV1();
expect(utils.writeToInfluxWithRetry).toHaveBeenCalledWith(
expect.any(Function),
expect(utils.writeBatchToInfluxV1).toHaveBeenCalledWith(
expect.any(Array),
'Event counts',
'v1',
''
'',
100
);
});
test('should handle write errors', async () => {
utils.writeToInfluxWithRetry.mockRejectedValue(new Error('Write failed'));
utils.writeBatchToInfluxV1.mockRejectedValue(new Error('Write failed'));
await expect(storeEventCountV1()).rejects.toThrow();
expect(globals.logger.error).toHaveBeenCalled();
});
test('should write rejected event counts', async () => {
await storeRejectedEventCountV1();
expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
});
test('should return early when no rejected events', async () => {
globals.rejectedEvents.getRejectedLogEvents.mockResolvedValue([]);
await storeRejectedEventCountV1();
expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
});
test('should return early when InfluxDB disabled for rejected events', async () => {
@@ -142,7 +145,7 @@ describe('v1/event-counts', () => {
]);
utils.isInfluxDbEnabled.mockReturnValue(false);
await storeRejectedEventCountV1();
expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
});
test('should handle rejected qix-perf events with appName', async () => {
@@ -163,7 +166,7 @@ describe('v1/event-counts', () => {
return null;
});
await storeRejectedEventCountV1();
expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
});
test('should handle rejected qix-perf events without appName', async () => {
@@ -180,7 +183,7 @@ describe('v1/event-counts', () => {
]);
globals.config.has.mockReturnValue(false);
await storeRejectedEventCountV1();
expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
});
test('should handle rejected non-qix-perf events', async () => {
@@ -193,14 +196,14 @@ describe('v1/event-counts', () => {
},
]);
await storeRejectedEventCountV1();
expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
});
test('should handle rejected events write errors', async () => {
globals.rejectedEvents.getRejectedLogEvents.mockResolvedValue([
{ source: 'test', counter: 1 },
]);
utils.writeToInfluxWithRetry.mockRejectedValue(new Error('Write failed'));
utils.writeBatchToInfluxV1.mockRejectedValue(new Error('Write failed'));
await expect(storeRejectedEventCountV1()).rejects.toThrow();
expect(globals.logger.error).toHaveBeenCalled();
});

View File

@@ -23,6 +23,7 @@ jest.unstable_mockModule('../../../globals.js', () => ({ default: mockGlobals })
const mockUtils = {
isInfluxDbEnabled: jest.fn(),
writeToInfluxWithRetry: jest.fn(),
writeBatchToInfluxV1: jest.fn(),
processAppDocuments: jest.fn(),
getFormattedTime: jest.fn(() => '2024-01-01T00:00:00Z'),
};
@@ -43,11 +44,13 @@ describe('v1/health-metrics', () => {
globals.config.get.mockImplementation((path) => {
if (path.includes('measurementName')) return 'health_metrics';
if (path.includes('tags')) return [{ name: 'env', value: 'prod' }];
if (path.includes('maxBatchSize')) return 100;
return undefined;
});
utils.isInfluxDbEnabled.mockReturnValue(true);
utils.writeToInfluxWithRetry.mockResolvedValue();
utils.writeBatchToInfluxV1.mockResolvedValue();
utils.processAppDocuments.mockResolvedValue({ appNames: [], sessionAppNames: [] });
});
@@ -55,7 +58,7 @@ describe('v1/health-metrics', () => {
utils.isInfluxDbEnabled.mockReturnValue(false);
const body = { mem: {}, apps: {}, cpu: {}, session: {}, users: {}, cache: {} };
await storeHealthMetricsV1({ server: 'server1' }, body);
expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
});
test('should write complete health metrics', async () => {
@@ -73,17 +76,24 @@ describe('v1/health-metrics', () => {
users: { active: 3, total: 8 },
cache: { hits: 100, lookups: 120, added: 20, replaced: 5, bytes_added: 1024 },
saturated: false,
version: '1.0.0',
started: '2024-01-01T00:00:00Z',
};
const serverTags = { server_name: 'server1', server_description: 'Test server' };
await storeHealthMetricsV1(serverTags, body);
expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
expect(utils.writeBatchToInfluxV1).toHaveBeenCalledWith(
expect.any(Array),
'Health metrics for server1',
'server1',
100
);
expect(utils.processAppDocuments).toHaveBeenCalledTimes(3);
});
test('should handle write errors', async () => {
utils.writeToInfluxWithRetry.mockRejectedValue(new Error('Write failed'));
utils.writeBatchToInfluxV1.mockRejectedValue(new Error('Write failed'));
const body = {
mem: {},
apps: { active_docs: [], loaded_docs: [], in_memory_docs: [] },
@@ -92,7 +102,7 @@ describe('v1/health-metrics', () => {
users: {},
cache: {},
};
await expect(storeHealthMetricsV1({}, body)).rejects.toThrow();
await expect(storeHealthMetricsV1({ server_name: 'server1' }, body)).rejects.toThrow();
expect(globals.logger.error).toHaveBeenCalled();
});
@@ -108,8 +118,10 @@ describe('v1/health-metrics', () => {
session: {},
users: {},
cache: {},
version: '1.0.0',
started: '2024-01-01T00:00:00Z',
};
await storeHealthMetricsV1({}, body);
await storeHealthMetricsV1({ server_name: 'server1' }, body);
expect(utils.processAppDocuments).toHaveBeenCalledTimes(3);
});
@@ -119,6 +131,7 @@ describe('v1/health-metrics', () => {
if (path.includes('tags')) return [{ name: 'env', value: 'prod' }];
if (path.includes('includeFields.activeDocs')) return true;
if (path.includes('enableAppNameExtract')) return true;
if (path.includes('maxBatchSize')) return 100;
return undefined;
});
utils.processAppDocuments.mockResolvedValue({
@@ -132,9 +145,11 @@ describe('v1/health-metrics', () => {
session: { active: 5 },
users: { active: 3 },
cache: { hits: 100 },
version: '1.0.0',
started: '2024-01-01T00:00:00Z',
};
await storeHealthMetricsV1({}, body);
expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
await storeHealthMetricsV1({ server_name: 'server1' }, body);
expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
});
test('should handle config with loadedDocs enabled', async () => {
@@ -143,6 +158,7 @@ describe('v1/health-metrics', () => {
if (path.includes('tags')) return [{ name: 'env', value: 'prod' }];
if (path.includes('includeFields.loadedDocs')) return true;
if (path.includes('enableAppNameExtract')) return true;
if (path.includes('maxBatchSize')) return 100;
return undefined;
});
utils.processAppDocuments.mockResolvedValue({
@@ -156,9 +172,11 @@ describe('v1/health-metrics', () => {
session: { active: 5 },
users: { active: 3 },
cache: { hits: 100 },
version: '1.0.0',
started: '2024-01-01T00:00:00Z',
};
-await storeHealthMetricsV1({}, body);
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+await storeHealthMetricsV1({ server_name: 'server1' }, body);
+expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
});
test('should handle config with inMemoryDocs enabled', async () => {
@@ -167,6 +185,7 @@ describe('v1/health-metrics', () => {
if (path.includes('tags')) return [{ name: 'env', value: 'prod' }];
if (path.includes('includeFields.inMemoryDocs')) return true;
if (path.includes('enableAppNameExtract')) return true;
if (path.includes('maxBatchSize')) return 100;
return undefined;
});
utils.processAppDocuments.mockResolvedValue({
@@ -180,9 +199,11 @@ describe('v1/health-metrics', () => {
session: { active: 5 },
users: { active: 3 },
cache: { hits: 100 },
version: '1.0.0',
started: '2024-01-01T00:00:00Z',
};
-await storeHealthMetricsV1({}, body);
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+await storeHealthMetricsV1({ server_name: 'server1' }, body);
+expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
});
test('should handle config with all doc types disabled', async () => {
@@ -191,6 +212,7 @@ describe('v1/health-metrics', () => {
if (path.includes('tags')) return [];
if (path.includes('includeFields')) return false;
if (path.includes('enableAppNameExtract')) return false;
if (path.includes('maxBatchSize')) return 100;
return undefined;
});
const body = {
@@ -200,8 +222,10 @@ describe('v1/health-metrics', () => {
session: { active: 5 },
users: { active: 3 },
cache: { hits: 100 },
version: '1.0.0',
started: '2024-01-01T00:00:00Z',
};
-await storeHealthMetricsV1({}, body);
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+await storeHealthMetricsV1({ server_name: 'server1' }, body);
+expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
});
});
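The pattern in the hunks above repeats across all V1 test files: the `writeToInfluxWithRetry` mock is replaced by a `writeBatchToInfluxV1` mock, and each expectation gains a trailing `maxBatchSize` argument (100 in the mocked config). As a rough sketch of the chunking behavior these assertions imply — the function name and `(datapoints, description, serverName, maxBatchSize)` signature are taken from the test expectations, but the body below is illustrative only, not the real helper in `shared/utils.js`:

```javascript
// Hypothetical stand-in for writeBatchToInfluxV1: splits datapoints into
// batches of at most maxBatchSize before writing. The real helper also
// handles retries and calls the InfluxDB client; only chunking is shown.
async function writeBatchToInfluxV1(datapoints, description, serverName, maxBatchSize) {
    const batchSizes = [];
    for (let i = 0; i < datapoints.length; i += maxBatchSize) {
        const batch = datapoints.slice(i, i + maxBatchSize);
        // A real implementation would await influx.writePoints(batch) here.
        batchSizes.push(batch.length);
    }
    console.log(`${description} (${serverName}): batches of ${batchSizes.join(', ')}`);
    return batchSizes;
}

// 250 datapoints with maxBatchSize 100 → batches of 100, 100, 50
writeBatchToInfluxV1(new Array(250).fill({}), 'Health metrics for server1', 'server1', 100);
```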


@@ -22,6 +22,7 @@ jest.unstable_mockModule('../../../globals.js', () => ({ default: mockGlobals })
const mockUtils = {
isInfluxDbEnabled: jest.fn(),
writeToInfluxWithRetry: jest.fn(),
writeBatchToInfluxV1: jest.fn(),
};
jest.unstable_mockModule('../shared/utils.js', () => mockUtils);
@@ -36,21 +37,25 @@ describe('v1/log-events', () => {
const logEvents = await import('../v1/log-events.js');
storeLogEventV1 = logEvents.storeLogEventV1;
globals.config.has.mockReturnValue(true);
-globals.config.get.mockReturnValue([{ name: 'env', value: 'prod' }]);
+globals.config.get.mockImplementation((path) => {
+if (path.includes('maxBatchSize')) return 100;
+return [{ name: 'env', value: 'prod' }];
+});
utils.isInfluxDbEnabled.mockReturnValue(true);
-utils.writeToInfluxWithRetry.mockResolvedValue();
+utils.writeBatchToInfluxV1.mockResolvedValue();
});
test('should return early when InfluxDB disabled', async () => {
utils.isInfluxDbEnabled.mockReturnValue(false);
await storeLogEventV1({ source: 'qseow-engine', host: 'server1' });
-expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
});
test('should warn for unsupported source', async () => {
await storeLogEventV1({ source: 'unknown', host: 'server1' });
expect(globals.logger.warn).toHaveBeenCalledWith(expect.stringContaining('Unsupported'));
-expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
});
test('should write qseow-engine event', async () => {
@@ -62,11 +67,11 @@ describe('v1/log-events', () => {
subsystem: 'System',
message: 'test',
});
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalledWith(
-expect.any(Function),
+expect(utils.writeBatchToInfluxV1).toHaveBeenCalledWith(
+expect.any(Array),
'Log event from qseow-engine',
-'v1',
-'server1'
+'server1',
+100
);
});
@@ -79,7 +84,7 @@ describe('v1/log-events', () => {
subsystem: 'Proxy',
message: 'test',
});
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
});
test('should write qseow-scheduler event', async () => {
@@ -91,7 +96,7 @@ describe('v1/log-events', () => {
subsystem: 'Scheduler',
message: 'test',
});
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
});
test('should write qseow-repository event', async () => {
@@ -103,7 +108,7 @@ describe('v1/log-events', () => {
subsystem: 'Repository',
message: 'test',
});
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
});
test('should write qseow-qix-perf event', async () => {
@@ -115,11 +120,11 @@ describe('v1/log-events', () => {
subsystem: 'Perf',
message: 'test',
});
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
});
test('should handle write errors', async () => {
-utils.writeToInfluxWithRetry.mockRejectedValue(new Error('Write failed'));
+utils.writeBatchToInfluxV1.mockRejectedValue(new Error('Write failed'));
await expect(
storeLogEventV1({
source: 'qseow-engine',
@@ -146,7 +151,7 @@ describe('v1/log-events', () => {
{ name: 'component', value: 'engine' },
],
});
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
});
test('should apply config tags when available', async () => {
@@ -163,7 +168,7 @@ describe('v1/log-events', () => {
subsystem: 'Proxy',
message: 'test',
});
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
});
test('should handle events without categories', async () => {
@@ -176,7 +181,7 @@ describe('v1/log-events', () => {
message: 'test',
category: [],
});
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
});
test('should handle engine event with all optional fields', async () => {
@@ -203,7 +208,7 @@ describe('v1/log-events', () => {
context: 'DocSession',
session_id: 'sess-001',
});
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
});
test('should handle engine event without optional fields', async () => {
@@ -219,7 +224,7 @@ describe('v1/log-events', () => {
user_id: '',
result_code: '',
});
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
});
test('should handle proxy event with optional fields', async () => {
@@ -238,7 +243,7 @@ describe('v1/log-events', () => {
origin: 'Proxy',
context: 'AuthSession',
});
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
});
test('should handle scheduler event with task fields', async () => {
@@ -258,7 +263,7 @@ describe('v1/log-events', () => {
app_id: 'finance-001',
execution_id: 'exec-999',
});
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
});
test('should handle repository event with optional fields', async () => {
@@ -277,7 +282,7 @@ describe('v1/log-events', () => {
origin: 'Repository',
context: 'API',
});
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
});
test('should handle qix-perf event with all fields', async () => {
@@ -301,7 +306,7 @@ describe('v1/log-events', () => {
object_id: 'obj-123',
process_time: 150,
});
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
});
test('should handle qix-perf event with missing optional fields', async () => {
@@ -324,6 +329,6 @@ describe('v1/log-events', () => {
app_name: '',
object_id: '',
});
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV1).toHaveBeenCalled();
});
});


@@ -43,6 +43,7 @@ jest.unstable_mockModule('../../../globals.js', () => ({ default: mockGlobals })
const mockUtils = {
isInfluxDbEnabled: jest.fn(),
writeToInfluxWithRetry: jest.fn(),
writeBatchToInfluxV1: jest.fn(),
};
jest.unstable_mockModule('../shared/utils.js', () => mockUtils);
@@ -110,16 +111,17 @@ describe('v1/queue-metrics', () => {
if (path.includes('measurementName')) return 'queue_metrics';
if (path.includes('queueMetrics.influxdb.tags'))
return [{ name: 'env', value: 'prod' }];
if (path === 'Butler-SOS.influxdbConfig.maxBatchSize') return 100;
return undefined;
});
utils.isInfluxDbEnabled.mockReturnValue(true);
-utils.writeToInfluxWithRetry.mockResolvedValue();
+utils.writeBatchToInfluxV1.mockResolvedValue();
});
test('should return early when InfluxDB disabled for user events', async () => {
utils.isInfluxDbEnabled.mockReturnValue(false);
await storeUserEventQueueMetricsV1();
-expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
});
test('should return early when config disabled', async () => {
@@ -128,7 +130,7 @@ describe('v1/queue-metrics', () => {
return undefined;
});
await storeUserEventQueueMetricsV1();
-expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
});
test('should return early when queue manager not initialized', async () => {
@@ -137,21 +139,22 @@ describe('v1/queue-metrics', () => {
expect(globals.logger.warn).toHaveBeenCalledWith(
expect.stringContaining('not initialized')
);
expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
});
test('should write user event queue metrics', async () => {
await storeUserEventQueueMetricsV1();
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalledWith(
-expect.any(Function),
+expect(utils.writeBatchToInfluxV1).toHaveBeenCalledWith(
+expect.any(Array),
expect.stringContaining('User event queue metrics'),
-'v1',
-''
+'',
+100
);
expect(globals.udpQueueManagerUserActivity.clearMetrics).toHaveBeenCalled();
});
test('should handle user event write errors', async () => {
-utils.writeToInfluxWithRetry.mockRejectedValue(new Error('Write failed'));
+utils.writeBatchToInfluxV1.mockRejectedValue(new Error('Write failed'));
await expect(storeUserEventQueueMetricsV1()).rejects.toThrow();
expect(globals.logger.error).toHaveBeenCalled();
});
@@ -159,7 +162,7 @@ describe('v1/queue-metrics', () => {
test('should return early when InfluxDB disabled for log events', async () => {
utils.isInfluxDbEnabled.mockReturnValue(false);
await storeLogEventQueueMetricsV1();
-expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
});
test('should return early when config disabled for log events', async () => {
@@ -168,7 +171,7 @@ describe('v1/queue-metrics', () => {
return undefined;
});
await storeLogEventQueueMetricsV1();
-expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
});
test('should return early when log queue manager not initialized', async () => {
@@ -177,21 +180,22 @@ describe('v1/queue-metrics', () => {
expect(globals.logger.warn).toHaveBeenCalledWith(
expect.stringContaining('not initialized')
);
expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
});
test('should write log event queue metrics', async () => {
await storeLogEventQueueMetricsV1();
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalledWith(
-expect.any(Function),
+expect(utils.writeBatchToInfluxV1).toHaveBeenCalledWith(
+expect.any(Array),
expect.stringContaining('Log event queue metrics'),
-'v1',
-''
+'',
+100
);
expect(globals.udpQueueManagerLogEvents.clearMetrics).toHaveBeenCalled();
});
test('should handle log event write errors', async () => {
-utils.writeToInfluxWithRetry.mockRejectedValue(new Error('Write failed'));
+utils.writeBatchToInfluxV1.mockRejectedValue(new Error('Write failed'));
await expect(storeLogEventQueueMetricsV1()).rejects.toThrow();
expect(globals.logger.error).toHaveBeenCalled();
});


@@ -31,6 +31,7 @@ jest.unstable_mockModule('../../../globals.js', () => ({
const mockUtils = {
isInfluxDbEnabled: jest.fn(),
writeToInfluxWithRetry: jest.fn(),
writeBatchToInfluxV1: jest.fn(),
};
jest.unstable_mockModule('../shared/utils.js', () => mockUtils);
@@ -51,6 +52,11 @@ describe('v1/sessions', () => {
// Setup default mocks
utils.isInfluxDbEnabled.mockReturnValue(true);
-utils.writeToInfluxWithRetry.mockResolvedValue();
+utils.writeBatchToInfluxV1.mockResolvedValue();
+globals.config.get.mockImplementation((path) => {
+if (path.includes('maxBatchSize')) return 100;
+return undefined;
+});
});
describe('storeSessionsV1', () => {
@@ -68,7 +74,7 @@ describe('v1/sessions', () => {
await storeSessionsV1(userSessions);
-expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
});
test('should return early when no datapoints', async () => {
@@ -86,7 +92,7 @@ describe('v1/sessions', () => {
expect(globals.logger.warn).toHaveBeenCalledWith(
'PROXY SESSIONS V1: No datapoints to write to InfluxDB'
);
-expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
});
test('should successfully write session data', async () => {
@@ -112,11 +118,11 @@ describe('v1/sessions', () => {
await storeSessionsV1(userSessions);
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalledWith(
-expect.any(Function),
+expect(utils.writeBatchToInfluxV1).toHaveBeenCalledWith(
+expect.any(Array),
'Proxy sessions for server1/vp1',
-'v1',
-'central'
+'central',
+100
);
expect(globals.logger.verbose).toHaveBeenCalledWith(
expect.stringContaining('Sent user session data to InfluxDB')
@@ -146,13 +152,14 @@ describe('v1/sessions', () => {
datapointInfluxdb: datapoints,
};
-utils.writeToInfluxWithRetry.mockImplementation(async (writeFn) => {
-await writeFn();
-});
await storeSessionsV1(userSessions);
-expect(globals.influx.writePoints).toHaveBeenCalledWith(datapoints);
+expect(utils.writeBatchToInfluxV1).toHaveBeenCalledWith(
+datapoints,
+expect.any(String),
+'central',
+100
+);
});
test('should handle write errors', async () => {
@@ -166,7 +173,7 @@ describe('v1/sessions', () => {
};
const writeError = new Error('Write failed');
-utils.writeToInfluxWithRetry.mockRejectedValue(writeError);
+utils.writeBatchToInfluxV1.mockRejectedValue(writeError);
await expect(storeSessionsV1(userSessions)).rejects.toThrow('Write failed');


@@ -32,6 +32,7 @@ const mockUtils = {
isInfluxDbEnabled: jest.fn(),
getConfigTags: jest.fn(),
writeToInfluxWithRetry: jest.fn(),
writeBatchToInfluxV1: jest.fn(),
};
jest.unstable_mockModule('../shared/utils.js', () => mockUtils);
@@ -51,9 +52,13 @@ describe('v1/user-events', () => {
// Setup default mocks
globals.config.has.mockReturnValue(true);
-globals.config.get.mockReturnValue([{ name: 'env', value: 'prod' }]);
+globals.config.get.mockImplementation((key) => {
+if (key === 'Butler-SOS.userEvents.tags') return [{ name: 'env', value: 'prod' }];
+if (key === 'Butler-SOS.influxdbConfig.maxBatchSize') return 100;
+return null;
+});
utils.isInfluxDbEnabled.mockReturnValue(true);
-utils.writeToInfluxWithRetry.mockResolvedValue();
+utils.writeBatchToInfluxV1.mockResolvedValue();
});
describe('storeUserEventV1', () => {
@@ -70,7 +75,7 @@ describe('v1/user-events', () => {
await storeUserEventV1(msg);
-expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
});
test('should successfully write user event', async () => {
@@ -84,11 +89,11 @@ describe('v1/user-events', () => {
await storeUserEventV1(msg);
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalledWith(
-expect.any(Function),
+expect(utils.writeBatchToInfluxV1).toHaveBeenCalledWith(
+expect.any(Array),
'User event',
-'v1',
-'server1'
+'server1',
+100
);
expect(globals.logger.verbose).toHaveBeenCalledWith(
'USER EVENT V1: Sent user event data to InfluxDB'
@@ -108,7 +113,7 @@ describe('v1/user-events', () => {
expect(globals.logger.warn).toHaveBeenCalledWith(
expect.stringContaining('Missing required field')
);
-expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
});
test('should validate required fields - missing command', async () => {
@@ -124,7 +129,7 @@ describe('v1/user-events', () => {
expect(globals.logger.warn).toHaveBeenCalledWith(
expect.stringContaining('Missing required field')
);
-expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
});
test('should validate required fields - missing user_directory', async () => {
@@ -140,7 +145,7 @@ describe('v1/user-events', () => {
expect(globals.logger.warn).toHaveBeenCalledWith(
expect.stringContaining('Missing required field')
);
-expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
});
test('should validate required fields - missing user_id', async () => {
@@ -156,7 +161,7 @@ describe('v1/user-events', () => {
expect(globals.logger.warn).toHaveBeenCalledWith(
expect.stringContaining('Missing required field')
);
-expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
});
test('should validate required fields - missing origin', async () => {
@@ -172,7 +177,7 @@ describe('v1/user-events', () => {
expect(globals.logger.warn).toHaveBeenCalledWith(
expect.stringContaining('Missing required field')
);
-expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV1).not.toHaveBeenCalled();
});
test('should create correct datapoint with config tags', async () => {
@@ -184,10 +189,6 @@ describe('v1/user-events', () => {
origin: 'AppAccess',
};
-utils.writeToInfluxWithRetry.mockImplementation(async (writeFn) => {
-await writeFn();
-});
await storeUserEventV1(msg);
const expectedDatapoint = expect.arrayContaining([
@@ -209,7 +210,12 @@ describe('v1/user-events', () => {
}),
]);
-expect(globals.influx.writePoints).toHaveBeenCalledWith(expectedDatapoint);
+expect(utils.writeBatchToInfluxV1).toHaveBeenCalledWith(
+expectedDatapoint,
+'User event',
+'server1',
+100
+);
});
test('should handle write errors', async () => {
@@ -222,7 +228,7 @@ describe('v1/user-events', () => {
};
const writeError = new Error('Write failed');
-utils.writeToInfluxWithRetry.mockRejectedValue(writeError);
+utils.writeBatchToInfluxV1.mockRejectedValue(writeError);
await expect(storeUserEventV1(msg)).rejects.toThrow('Write failed');


@@ -34,6 +34,7 @@ jest.unstable_mockModule('@influxdata/influxdb-client', () => ({
const mockUtils = {
isInfluxDbEnabled: jest.fn(),
writeToInfluxWithRetry: jest.fn(),
writeBatchToInfluxV2: jest.fn(),
};
jest.unstable_mockModule('../shared/utils.js', () => mockUtils);
@@ -56,11 +57,11 @@ describe('v2/butler-memory', () => {
globals.config.get.mockImplementation((path) => {
if (path.includes('org')) return 'test-org';
if (path.includes('bucket')) return 'test-bucket';
if (path.includes('maxBatchSize')) return 100;
return undefined;
});
utils.isInfluxDbEnabled.mockReturnValue(true);
utils.writeToInfluxWithRetry.mockImplementation(async (fn) => await fn());
mockWriteApi.writePoint.mockResolvedValue(undefined);
});
@@ -74,12 +75,12 @@ describe('v2/butler-memory', () => {
processMemoryMByte: 250,
};
await storeButlerMemoryV2(memory);
-expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV2).not.toHaveBeenCalled();
});
test('should return early with invalid memory data', async () => {
await storeButlerMemoryV2(null);
-expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV2).not.toHaveBeenCalled();
expect(globals.logger.warn).toHaveBeenCalledWith(
'MEMORY USAGE V2: Invalid memory data provided'
);
@@ -87,7 +88,7 @@ describe('v2/butler-memory', () => {
test('should return early with non-object memory data', async () => {
await storeButlerMemoryV2('not an object');
-expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV2).not.toHaveBeenCalled();
expect(globals.logger.warn).toHaveBeenCalled();
});
@@ -109,9 +110,14 @@ describe('v2/butler-memory', () => {
expect(mockPoint.floatField).toHaveBeenCalledWith('heap_total', 300.2);
expect(mockPoint.floatField).toHaveBeenCalledWith('external', 75.8);
expect(mockPoint.floatField).toHaveBeenCalledWith('process_memory', 400.1);
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
-expect(mockWriteApi.writePoint).toHaveBeenCalled();
-expect(mockWriteApi.close).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV2).toHaveBeenCalledWith(
+[mockPoint],
+'test-org',
+'test-bucket',
+'Memory usage metrics',
+'',
+100
+);
expect(globals.logger.verbose).toHaveBeenCalledWith(
'MEMORY USAGE V2: Sent Butler SOS memory usage data to InfluxDB'
);
@@ -129,7 +135,7 @@ describe('v2/butler-memory', () => {
await storeButlerMemoryV2(memory);
expect(mockPoint.floatField).toHaveBeenCalledWith('heap_used', 0);
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV2).toHaveBeenCalled();
});
test('should log silly level debug info', async () => {


@@ -51,6 +51,7 @@ jest.unstable_mockModule('@influxdata/influxdb-client', () => ({
const mockUtils = {
isInfluxDbEnabled: jest.fn(),
writeToInfluxWithRetry: jest.fn(),
writeBatchToInfluxV2: jest.fn(),
};
jest.unstable_mockModule('../shared/utils.js', () => mockUtils);
@@ -129,9 +130,7 @@ describe('v2/event-counts', () => {
expect(mockPoint.intField).toHaveBeenCalledWith('counter', 200);
expect(mockPoint.intField).toHaveBeenCalledWith('counter', 100);
expect(mockV2Utils.applyInfluxTags).toHaveBeenCalledTimes(2);
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
-expect(mockWriteApi.writePoints).toHaveBeenCalled();
-expect(mockWriteApi.close).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV2).toHaveBeenCalled();
});
test('should handle zero counts', async () => {
@@ -184,7 +183,7 @@ describe('v2/event-counts', () => {
expect(mockPoint.tag).toHaveBeenCalledWith('source', 'qseow-proxy');
expect(mockPoint.intField).toHaveBeenCalledWith('counter', 5);
expect(mockPoint.intField).toHaveBeenCalledWith('counter', 3);
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV2).toHaveBeenCalled();
});
test('should handle empty rejection tags', async () => {


@@ -38,6 +38,7 @@ jest.unstable_mockModule('@influxdata/influxdb-client', () => ({
const mockUtils = {
isInfluxDbEnabled: jest.fn(),
writeToInfluxWithRetry: jest.fn(),
writeBatchToInfluxV2: jest.fn(),
processAppDocuments: jest.fn(),
getFormattedTime: jest.fn(() => '2 days, 3 hours'),
};


@@ -35,6 +35,7 @@ jest.unstable_mockModule('@influxdata/influxdb-client', () => ({
const mockUtils = {
isInfluxDbEnabled: jest.fn(),
writeToInfluxWithRetry: jest.fn(),
writeBatchToInfluxV2: jest.fn(),
};
jest.unstable_mockModule('../shared/utils.js', () => mockUtils);
@@ -99,8 +100,8 @@ describe('v2/log-events', () => {
};
await storeLogEventV2(msg);
// Implementation doesn't explicitly validate required fields, it just processes what's there
-// So this test will actually call writeToInfluxWithRetry
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+// So this test will actually call writeBatchToInfluxV2
+expect(utils.writeBatchToInfluxV2).toHaveBeenCalled();
});
test('should return early with unsupported source', async () => {
@@ -114,7 +115,7 @@ describe('v2/log-events', () => {
};
await storeLogEventV2(msg);
expect(globals.logger.warn).toHaveBeenCalled();
-expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV2).not.toHaveBeenCalled();
});
test('should write engine log event', async () => {
@@ -167,7 +168,7 @@ describe('v2/log-events', () => {
expect(mockPoint.stringField).toHaveBeenCalledWith('context', 'Init');
expect(mockPoint.stringField).toHaveBeenCalledWith('session_id', '');
expect(mockPoint.stringField).toHaveBeenCalledWith('raw_event', expect.any(String));
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV2).toHaveBeenCalled();
});
test('should write proxy log event', async () => {
@@ -194,7 +195,7 @@ describe('v2/log-events', () => {
expect(mockPoint.tag).toHaveBeenCalledWith('result_code', '403');
expect(mockPoint.stringField).toHaveBeenCalledWith('command', 'Login');
expect(mockPoint.stringField).toHaveBeenCalledWith('result_code_field', '403');
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV2).toHaveBeenCalled();
});
test('should write repository log event', async () => {
@@ -216,7 +217,7 @@ describe('v2/log-events', () => {
'exception_message',
'Connection timeout'
);
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV2).toHaveBeenCalled();
});
test('should write scheduler log event', async () => {
@@ -237,7 +238,7 @@ describe('v2/log-events', () => {
expect(mockPoint.tag).toHaveBeenCalledWith('level', 'INFO');
expect(mockPoint.tag).toHaveBeenCalledWith('task_id', 'sched-task-001');
expect(mockPoint.tag).toHaveBeenCalledWith('task_name', 'Daily Reload');
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV2).toHaveBeenCalled();
});
test('should handle log event with minimal fields', async () => {
@@ -256,7 +257,7 @@ describe('v2/log-events', () => {
expect(mockPoint.tag).toHaveBeenCalledWith('source', 'qseow-engine');
expect(mockPoint.tag).toHaveBeenCalledWith('level', 'DEBUG');
expect(mockPoint.stringField).toHaveBeenCalledWith('message', 'Debug message');
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV2).toHaveBeenCalled();
});
test('should handle empty string fields', async () => {
@@ -274,7 +275,7 @@ describe('v2/log-events', () => {
await storeLogEventV2(msg);
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV2).toHaveBeenCalled();
});
test('should apply config tags', async () => {

View File

@@ -42,6 +42,7 @@ jest.unstable_mockModule('@influxdata/influxdb-client', () => ({
const mockUtils = {
isInfluxDbEnabled: jest.fn(),
writeToInfluxWithRetry: jest.fn(),
writeBatchToInfluxV2: jest.fn(),
};
jest.unstable_mockModule('../shared/utils.js', () => mockUtils);
@@ -76,6 +77,7 @@ describe('v2/queue-metrics', () => {
if (path.includes('queueMetrics.influxdb.tags'))
return [{ name: 'env', value: 'prod' }];
if (path.includes('enable')) return true;
if (path === 'Butler-SOS.influxdbConfig.maxBatchSize') return 100;
return undefined;
});
globals.config.has.mockReturnValue(true);
@@ -84,7 +86,7 @@ describe('v2/queue-metrics', () => {
globals.udpQueueManagerLogEvents = mockQueueManager;
utils.isInfluxDbEnabled.mockReturnValue(true);
-utils.writeToInfluxWithRetry.mockImplementation(async (cb) => await cb());
+utils.writeBatchToInfluxV2.mockResolvedValue();
mockWriteApi.writePoint.mockResolvedValue(undefined);
mockWriteApi.close.mockResolvedValue(undefined);
@@ -114,7 +116,7 @@ describe('v2/queue-metrics', () => {
test('should return early when InfluxDB disabled', async () => {
utils.isInfluxDbEnabled.mockReturnValue(false);
await storeUserEventQueueMetricsV2();
-expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV2).not.toHaveBeenCalled();
});
test('should return early when feature disabled', async () => {
@@ -123,13 +125,13 @@ describe('v2/queue-metrics', () => {
return undefined;
});
await storeUserEventQueueMetricsV2();
-expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV2).not.toHaveBeenCalled();
});
test('should return early when queue manager not initialized', async () => {
globals.udpQueueManagerUserActivity = null;
await storeUserEventQueueMetricsV2();
-expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV2).not.toHaveBeenCalled();
expect(globals.logger.warn).toHaveBeenCalledWith(
'USER EVENT QUEUE METRICS V2: Queue manager not initialized'
);
@@ -161,9 +163,14 @@ describe('v2/queue-metrics', () => {
expect(mockV2Utils.applyInfluxTags).toHaveBeenCalledWith(mockPoint, [
{ name: 'env', value: 'prod' },
]);
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
-expect(mockWriteApi.writePoint).toHaveBeenCalledWith(mockPoint);
-expect(mockWriteApi.close).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV2).toHaveBeenCalledWith(
+[mockPoint],
+'test-org',
+'test-bucket',
+'User event queue metrics',
+'user-events-queue',
+100
+);
expect(mockQueueManager.clearMetrics).toHaveBeenCalled();
});
@@ -191,7 +198,14 @@ describe('v2/queue-metrics', () => {
await storeUserEventQueueMetricsV2();
expect(mockPoint.intField).toHaveBeenCalledWith('queue_size', 0);
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV2).toHaveBeenCalledWith(
+[mockPoint],
+'test-org',
+'test-bucket',
+'User event queue metrics',
+'user-events-queue',
+100
+);
});
test('should log verbose information', async () => {
@@ -207,7 +221,7 @@ describe('v2/queue-metrics', () => {
test('should return early when InfluxDB disabled', async () => {
utils.isInfluxDbEnabled.mockReturnValue(false);
await storeLogEventQueueMetricsV2();
-expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV2).not.toHaveBeenCalled();
});
test('should return early when feature disabled', async () => {
@@ -216,13 +230,13 @@ describe('v2/queue-metrics', () => {
return undefined;
});
await storeLogEventQueueMetricsV2();
-expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV2).not.toHaveBeenCalled();
});
test('should return early when queue manager not initialized', async () => {
globals.udpQueueManagerLogEvents = null;
await storeLogEventQueueMetricsV2();
-expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV2).not.toHaveBeenCalled();
expect(globals.logger.warn).toHaveBeenCalledWith(
'LOG EVENT QUEUE METRICS V2: Queue manager not initialized'
);
@@ -235,7 +249,14 @@ describe('v2/queue-metrics', () => {
expect(mockPoint.tag).toHaveBeenCalledWith('queue_type', 'log_events');
expect(mockPoint.tag).toHaveBeenCalledWith('host', 'test-host');
expect(mockPoint.intField).toHaveBeenCalledWith('queue_size', 100);
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV2).toHaveBeenCalledWith(
+[mockPoint],
+'test-org',
+'test-bucket',
+'Log event queue metrics',
+'log-events-queue',
+100
+);
expect(mockQueueManager.clearMetrics).toHaveBeenCalled();
});
@@ -264,7 +285,14 @@ describe('v2/queue-metrics', () => {
expect(mockPoint.floatField).toHaveBeenCalledWith('queue_utilization_pct', 95.0);
expect(mockPoint.intField).toHaveBeenCalledWith('backpressure_active', 1);
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV2).toHaveBeenCalledWith(
+[mockPoint],
+'test-org',
+'test-bucket',
+'Log event queue metrics',
+'log-events-queue',
+100
+);
});
test('should log verbose information', async () => {

View File

@@ -30,6 +30,7 @@ jest.unstable_mockModule('../../../globals.js', () => ({ default: mockGlobals })
const mockUtils = {
isInfluxDbEnabled: jest.fn(),
writeToInfluxWithRetry: jest.fn(),
writeBatchToInfluxV2: jest.fn(),
};
jest.unstable_mockModule('../shared/utils.js', () => mockUtils);

View File

@@ -33,6 +33,7 @@ jest.unstable_mockModule('@influxdata/influxdb-client', () => ({
const mockUtils = {
isInfluxDbEnabled: jest.fn(),
writeToInfluxWithRetry: jest.fn(),
writeBatchToInfluxV2: jest.fn(),
};
jest.unstable_mockModule('../shared/utils.js', () => mockUtils);


@@ -30,6 +30,7 @@ jest.unstable_mockModule('../../../globals.js', () => ({
 const mockUtils = {
 isInfluxDbEnabled: jest.fn(),
 writeToInfluxWithRetry: jest.fn(),
+writeBatchToInfluxV3: jest.fn(),
 };
 jest.unstable_mockModule('../shared/utils.js', () => mockUtils);
@@ -61,7 +62,7 @@ describe('v3/butler-memory', () => {
 // Setup default mocks
 globals.config.get.mockReturnValue('test-db');
 utils.isInfluxDbEnabled.mockReturnValue(true);
-utils.writeToInfluxWithRetry.mockResolvedValue();
+utils.writeBatchToInfluxV3.mockResolvedValue();
 });
 describe('postButlerSOSMemoryUsageToInfluxdbV3', () => {
@@ -78,7 +79,7 @@ describe('v3/butler-memory', () => {
 await postButlerSOSMemoryUsageToInfluxdbV3(memory);
-expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV3).not.toHaveBeenCalled();
 });
 test('should successfully write memory usage metrics', async () => {
@@ -98,7 +99,7 @@ describe('v3/butler-memory', () => {
 expect(mockPoint.setFloatField).toHaveBeenCalledWith('heap_total', 200.75);
 expect(mockPoint.setFloatField).toHaveBeenCalledWith('external', 50.25);
 expect(mockPoint.setFloatField).toHaveBeenCalledWith('process_memory', 250.5);
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV3).toHaveBeenCalled();
 });
 test('should handle write errors', async () => {
@@ -111,7 +112,7 @@ describe('v3/butler-memory', () => {
 };
 const writeError = new Error('Write failed');
-utils.writeToInfluxWithRetry.mockRejectedValue(writeError);
+utils.writeBatchToInfluxV3.mockRejectedValue(writeError);
 await postButlerSOSMemoryUsageToInfluxdbV3(memory);


@@ -40,6 +40,7 @@ jest.unstable_mockModule('../../../globals.js', () => ({
 const mockUtils = {
 isInfluxDbEnabled: jest.fn(),
 writeToInfluxWithRetry: jest.fn(),
+writeBatchToInfluxV3: jest.fn(),
 };
 jest.unstable_mockModule('../shared/utils.js', () => mockUtils);
@@ -78,11 +79,13 @@ describe('v3/event-counts', () => {
 return 'event_count';
 if (key === 'Butler-SOS.qlikSenseEvents.rejectedEventCount.influxdb.measurementName')
 return 'rejected_event_count';
+if (key === 'Butler-SOS.influxdbConfig.maxBatchSize') return 100;
 return null;
 });
 globals.config.has.mockReturnValue(false);
 utils.isInfluxDbEnabled.mockReturnValue(true);
 utils.writeToInfluxWithRetry.mockResolvedValue();
+utils.writeBatchToInfluxV3.mockResolvedValue();
 });
 describe('storeEventCountInfluxDBV3', () => {
@@ -128,7 +131,7 @@
 await storeEventCountInfluxDBV3();
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalledTimes(2);
+expect(utils.writeBatchToInfluxV3).toHaveBeenCalledTimes(1);
 expect(mockPoint.setTag).toHaveBeenCalledWith('event_type', 'log');
 expect(mockPoint.setTag).toHaveBeenCalledWith('source', 'qseow-engine');
 expect(mockPoint.setIntegerField).toHaveBeenCalledWith('counter', 10);
@@ -148,7 +151,7 @@
 await storeEventCountInfluxDBV3();
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalledTimes(1);
+expect(utils.writeBatchToInfluxV3).toHaveBeenCalledTimes(1);
 expect(mockPoint.setTag).toHaveBeenCalledWith('event_type', 'user');
 expect(mockPoint.setIntegerField).toHaveBeenCalledWith('counter', 15);
 });
@@ -166,7 +169,7 @@
 await storeEventCountInfluxDBV3();
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalledTimes(2);
+expect(utils.writeBatchToInfluxV3).toHaveBeenCalledTimes(1);
 });
 test('should apply config tags when available', async () => {
@@ -199,7 +202,7 @@
 globals.udpEvents.getUserEvents.mockResolvedValue([]);
 const writeError = new Error('Write failed');
-utils.writeToInfluxWithRetry.mockRejectedValue(writeError);
+utils.writeBatchToInfluxV3.mockRejectedValue(writeError);
 await storeEventCountInfluxDBV3();
@@ -218,7 +221,7 @@
 expect(globals.logger.verbose).toHaveBeenCalledWith(
 expect.stringContaining('No events to store')
 );
-expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV3).not.toHaveBeenCalled();
 });
 test('should return early when InfluxDB is disabled', async () => {
@@ -227,7 +230,7 @@
 await storeRejectedEventCountInfluxDBV3();
-expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV3).not.toHaveBeenCalled();
 });
 test('should store rejected log events successfully', async () => {
@@ -248,7 +251,7 @@
 await storeRejectedEventCountInfluxDBV3();
 // Should have written the rejected event
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV3).toHaveBeenCalled();
 expect(globals.logger.debug).toHaveBeenCalledWith(
 expect.stringContaining('Wrote data to InfluxDB v3')
 );
@@ -266,7 +269,7 @@
 globals.rejectedEvents.getRejectedLogEvents.mockResolvedValue(logEvents);
 const writeError = new Error('Write failed');
-utils.writeToInfluxWithRetry.mockRejectedValue(writeError);
+utils.writeBatchToInfluxV3.mockRejectedValue(writeError);
 await storeRejectedEventCountInfluxDBV3();


@@ -34,6 +34,7 @@ const mockUtils = {
 isInfluxDbEnabled: jest.fn(),
 applyTagsToPoint3: jest.fn(),
 writeToInfluxWithRetry: jest.fn(),
+writeBatchToInfluxV3: jest.fn(),
 validateUnsignedField: jest.fn((value) =>
 typeof value === 'number' && value >= 0 ? value : 0
 ),
@@ -80,6 +81,7 @@ describe('v3/health-metrics', () => {
 if (key === 'Butler-SOS.influxdbConfig.includeFields.loadedDocs') return true;
 if (key === 'Butler-SOS.influxdbConfig.includeFields.inMemoryDocs') return true;
 if (key === 'Butler-SOS.appNames.enableAppNameExtract') return true;
+if (key === 'Butler-SOS.influxdbConfig.maxBatchSize') return 100;
 return false;
 });
@@ -89,7 +91,7 @@ describe('v3/health-metrics', () => {
 sessionAppNames: ['SessionApp1'],
 });
 utils.isInfluxDbEnabled.mockReturnValue(true);
-utils.writeToInfluxWithRetry.mockResolvedValue();
+utils.writeBatchToInfluxV3.mockResolvedValue();
 utils.applyTagsToPoint3.mockImplementation(() => {});
 // Setup influxWriteApi
@@ -148,7 +150,7 @@ describe('v3/health-metrics', () => {
 await postHealthMetricsToInfluxdbV3('test-server', 'test-host', body, {});
-expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV3).not.toHaveBeenCalled();
 });
 test('should warn and return when influxWriteApi is not initialized', async () => {
@@ -160,7 +162,7 @@ describe('v3/health-metrics', () => {
 expect(globals.logger.warn).toHaveBeenCalledWith(
 expect.stringContaining('Influxdb write API object not initialized')
 );
-expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV3).not.toHaveBeenCalled();
 });
 test('should warn and return when writeApi not found for server', async () => {
@@ -171,7 +173,7 @@ describe('v3/health-metrics', () => {
 expect(globals.logger.warn).toHaveBeenCalledWith(
 expect.stringContaining('Influxdb write API object not found for host test-host')
 );
-expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV3).not.toHaveBeenCalled();
 });
 test('should process and write all health metrics successfully', async () => {
@@ -201,8 +203,15 @@ describe('v3/health-metrics', () => {
 // Should apply tags to all 8 points
 expect(utils.applyTagsToPoint3).toHaveBeenCalledTimes(8);
-// Should write all 8 measurements
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalledTimes(8);
+// Should write all 8 measurements in one batch
+expect(utils.writeBatchToInfluxV3).toHaveBeenCalledTimes(1);
+expect(utils.writeBatchToInfluxV3).toHaveBeenCalledWith(
+expect.any(Array),
+'test-db',
+expect.stringContaining('Health metrics for'),
+'health-metrics',
+100
+);
 });
 test('should call getFormattedTime with started timestamp', async () => {
@@ -231,7 +240,7 @@ describe('v3/health-metrics', () => {
 test('should handle write errors with error tracking', async () => {
 const body = createMockBody();
 const writeError = new Error('Write failed');
-utils.writeToInfluxWithRetry.mockRejectedValue(writeError);
+utils.writeBatchToInfluxV3.mockRejectedValue(writeError);
 await postHealthMetricsToInfluxdbV3('test-server', 'test-host', body, {});


@@ -30,6 +30,7 @@ jest.unstable_mockModule('../../../globals.js', () => ({
 const mockUtils = {
 isInfluxDbEnabled: jest.fn(),
 writeToInfluxWithRetry: jest.fn(),
+writeBatchToInfluxV3: jest.fn(),
 };
 jest.unstable_mockModule('../shared/utils.js', () => mockUtils);
@@ -110,7 +111,7 @@ describe('v3/log-events', () => {
 expect(mockPoint.setTag).toHaveBeenCalledWith('source', 'qseow-engine');
 expect(mockPoint.setTag).toHaveBeenCalledWith('level', 'INFO');
 expect(mockPoint.setStringField).toHaveBeenCalledWith('message', 'Test message');
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV3).toHaveBeenCalled();
 });
 test('should successfully write qseow-proxy log event', async () => {
@@ -126,7 +127,7 @@ describe('v3/log-events', () => {
 expect(mockPoint.setTag).toHaveBeenCalledWith('host', 'server1');
 expect(mockPoint.setTag).toHaveBeenCalledWith('source', 'qseow-proxy');
 expect(mockPoint.setTag).toHaveBeenCalledWith('level', 'WARN');
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV3).toHaveBeenCalled();
 });
 test('should successfully write qseow-scheduler log event', async () => {
@@ -140,7 +141,7 @@ describe('v3/log-events', () => {
 await postLogEventToInfluxdbV3(msg);
 expect(mockPoint.setTag).toHaveBeenCalledWith('source', 'qseow-scheduler');
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV3).toHaveBeenCalled();
 });
 test('should successfully write qseow-repository log event', async () => {
@@ -154,7 +155,7 @@ describe('v3/log-events', () => {
 await postLogEventToInfluxdbV3(msg);
 expect(mockPoint.setTag).toHaveBeenCalledWith('source', 'qseow-repository');
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV3).toHaveBeenCalled();
 });
 test('should successfully write qseow-qix-perf log event', async () => {
@@ -178,7 +179,7 @@ describe('v3/log-events', () => {
 await postLogEventToInfluxdbV3(msg);
 expect(mockPoint.setTag).toHaveBeenCalledWith('source', 'qseow-qix-perf');
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV3).toHaveBeenCalled();
 });
 test('should handle write errors', async () => {
@@ -190,7 +191,7 @@ describe('v3/log-events', () => {
 };
 const writeError = new Error('Write failed');
-utils.writeToInfluxWithRetry.mockRejectedValue(writeError);
+utils.writeBatchToInfluxV3.mockRejectedValue(writeError);
 await postLogEventToInfluxdbV3(msg);
@@ -221,7 +222,7 @@ describe('v3/log-events', () => {
 'exception_message',
 'Exception details'
 );
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV3).toHaveBeenCalled();
 });
});
});


@@ -49,6 +49,7 @@ jest.unstable_mockModule('@influxdata/influxdb3-client', () => ({
 jest.unstable_mockModule('../shared/utils.js', () => ({
 isInfluxDbEnabled: jest.fn(),
 writeToInfluxWithRetry: jest.fn(),
+writeBatchToInfluxV3: jest.fn(),
 }));
 describe('InfluxDB v3 Queue Metrics', () => {
@@ -69,7 +70,7 @@ describe('InfluxDB v3 Queue Metrics', () => {
 // Setup default mocks
 utils.isInfluxDbEnabled.mockReturnValue(true);
-utils.writeToInfluxWithRetry.mockResolvedValue();
+utils.writeBatchToInfluxV3.mockResolvedValue();
 });
 describe('postUserEventQueueMetricsToInfluxdbV3', () => {
@@ -79,7 +80,7 @@ describe('InfluxDB v3 Queue Metrics', () => {
 await queueMetrics.postUserEventQueueMetricsToInfluxdbV3();
 expect(Point3).not.toHaveBeenCalled();
-expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV3).not.toHaveBeenCalled();
 });
 test('should warn when queue manager is not initialized', async () => {
@@ -102,7 +103,7 @@ describe('InfluxDB v3 Queue Metrics', () => {
 await queueMetrics.postUserEventQueueMetricsToInfluxdbV3();
 expect(Point3).not.toHaveBeenCalled();
-expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV3).not.toHaveBeenCalled();
 });
 test('should successfully write queue metrics', async () => {
@@ -142,6 +143,9 @@ describe('InfluxDB v3 Queue Metrics', () => {
 if (key === 'Butler-SOS.influxdbConfig.v3Config.database') {
 return 'test-db';
 }
+if (key === 'Butler-SOS.influxdbConfig.maxBatchSize') {
+return 100;
+}
 return null;
 });
@@ -153,11 +157,12 @@ describe('InfluxDB v3 Queue Metrics', () => {
 await queueMetrics.postUserEventQueueMetricsToInfluxdbV3();
 expect(Point3).toHaveBeenCalledWith('user_events_queue');
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalledWith(
-expect.any(Function),
+expect(utils.writeBatchToInfluxV3).toHaveBeenCalledWith(
+expect.any(Array),
 'test-db',
 'User event queue metrics',
-'v3',
-'user-events-queue'
+'user-events-queue',
+100
 );
 expect(globals.logger.verbose).toHaveBeenCalledWith(
 'USER EVENT QUEUE METRICS INFLUXDB V3: Sent queue metrics data to InfluxDB v3'
@@ -194,7 +199,7 @@ describe('InfluxDB v3 Queue Metrics', () => {
 await queueMetrics.postLogEventQueueMetricsToInfluxdbV3();
 expect(Point3).not.toHaveBeenCalled();
-expect(utils.writeToInfluxWithRetry).not.toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV3).not.toHaveBeenCalled();
 });
 test('should warn when queue manager is not initialized', async () => {
@@ -246,6 +251,9 @@ describe('InfluxDB v3 Queue Metrics', () => {
 if (key === 'Butler-SOS.influxdbConfig.v3Config.database') {
 return 'test-db';
 }
+if (key === 'Butler-SOS.influxdbConfig.maxBatchSize') {
+return 100;
+}
 return null;
 });
@@ -257,11 +265,12 @@ describe('InfluxDB v3 Queue Metrics', () => {
 await queueMetrics.postLogEventQueueMetricsToInfluxdbV3();
 expect(Point3).toHaveBeenCalledWith('log_events_queue');
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalledWith(
-expect.any(Function),
+expect(utils.writeBatchToInfluxV3).toHaveBeenCalledWith(
+expect.any(Array),
 'test-db',
 'Log event queue metrics',
-'v3',
-'log-events-queue'
+'log-events-queue',
+100
 );
 expect(globals.logger.verbose).toHaveBeenCalledWith(
 'LOG EVENT QUEUE METRICS INFLUXDB V3: Sent queue metrics data to InfluxDB v3'
@@ -312,7 +321,7 @@ describe('InfluxDB v3 Queue Metrics', () => {
 clearMetrics: jest.fn(),
 };
-utils.writeToInfluxWithRetry.mockRejectedValue(new Error('Write failed'));
+utils.writeBatchToInfluxV3.mockRejectedValue(new Error('Write failed'));
 await queueMetrics.postLogEventQueueMetricsToInfluxdbV3();


@@ -30,6 +30,7 @@ jest.unstable_mockModule('../../../globals.js', () => ({
 const mockUtils = {
 isInfluxDbEnabled: jest.fn(),
 writeToInfluxWithRetry: jest.fn(),
+writeBatchToInfluxV3: jest.fn(),
 };
 jest.unstable_mockModule('../shared/utils.js', () => mockUtils);
@@ -59,10 +60,13 @@ describe('v3/sessions', () => {
 postProxySessionsToInfluxdbV3 = sessions.postProxySessionsToInfluxdbV3;
 // Setup default mocks
-globals.config.get.mockReturnValue('test-db');
-globals.influx.write.mockResolvedValue();
+globals.config.get.mockImplementation((key) => {
+if (key === 'Butler-SOS.influxdbConfig.v3Config.database') return 'test-db';
+if (key === 'Butler-SOS.influxdbConfig.maxBatchSize') return 100;
+return undefined;
+});
 utils.isInfluxDbEnabled.mockReturnValue(true);
-utils.writeToInfluxWithRetry.mockImplementation(async (fn) => await fn());
+utils.writeBatchToInfluxV3.mockResolvedValue();
 });
 describe('postProxySessionsToInfluxdbV3', () => {
@@ -80,7 +84,7 @@ describe('v3/sessions', () => {
 await postProxySessionsToInfluxdbV3(userSessions);
-expect(globals.influx.write).not.toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV3).not.toHaveBeenCalled();
 });
 test('should warn when no datapoints to write', async () => {
@@ -115,9 +119,14 @@ describe('v3/sessions', () => {
 await postProxySessionsToInfluxdbV3(userSessions);
-expect(globals.influx.write).toHaveBeenCalledTimes(2);
-expect(globals.influx.write).toHaveBeenCalledWith('session1', 'test-db');
-expect(globals.influx.write).toHaveBeenCalledWith('session2', 'test-db');
+expect(utils.writeBatchToInfluxV3).toHaveBeenCalledTimes(1);
+expect(utils.writeBatchToInfluxV3).toHaveBeenCalledWith(
+[datapoint1, datapoint2],
+'test-db',
+'Proxy sessions for server1//vp1',
+'server1',
+100
+);
 expect(globals.logger.debug).toHaveBeenCalledWith(
 expect.stringContaining('Wrote 2 datapoints')
 );
@@ -135,7 +144,7 @@ describe('v3/sessions', () => {
 };
 const writeError = new Error('Write failed');
-globals.influx.write.mockRejectedValue(writeError);
+utils.writeBatchToInfluxV3.mockRejectedValue(writeError);
 await postProxySessionsToInfluxdbV3(userSessions);


@@ -31,6 +31,7 @@ jest.unstable_mockModule('../../../globals.js', () => ({
 const mockUtils = {
 isInfluxDbEnabled: jest.fn(),
 writeToInfluxWithRetry: jest.fn(),
+writeBatchToInfluxV3: jest.fn(),
 };
 jest.unstable_mockModule('../shared/utils.js', () => mockUtils);
@@ -65,6 +66,7 @@ describe('v3/user-events', () => {
 globals.config.get.mockReturnValue('test-db');
 utils.isInfluxDbEnabled.mockReturnValue(true);
 utils.writeToInfluxWithRetry.mockResolvedValue();
+utils.writeBatchToInfluxV3.mockResolvedValue();
 });
 describe('postUserEventToInfluxdbV3', () => {
@@ -117,7 +119,7 @@ describe('v3/user-events', () => {
 await postUserEventToInfluxdbV3(msg);
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV3).toHaveBeenCalled();
 expect(mockPoint.setTag).toHaveBeenCalledWith('host', 'server1');
 expect(mockPoint.setTag).toHaveBeenCalledWith('event_action', 'OpenApp');
 expect(mockPoint.setTag).toHaveBeenCalledWith('userDirectory', 'DOMAIN');
@@ -136,7 +138,7 @@ describe('v3/user-events', () => {
 await postUserEventToInfluxdbV3(msg);
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV3).toHaveBeenCalled();
 expect(mockPoint.setTag).toHaveBeenCalledWith('host', 'server1');
 expect(mockPoint.setTag).toHaveBeenCalledWith('event_action', 'CreateApp');
 });
@@ -152,7 +154,7 @@ describe('v3/user-events', () => {
 await postUserEventToInfluxdbV3(msg);
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV3).toHaveBeenCalled();
 });
 test('should handle write errors', async () => {
@@ -165,7 +167,7 @@ describe('v3/user-events', () => {
 };
 const writeError = new Error('Write failed');
-utils.writeToInfluxWithRetry.mockRejectedValue(writeError);
+utils.writeBatchToInfluxV3.mockRejectedValue(writeError);
 await postUserEventToInfluxdbV3(msg);
@@ -199,7 +201,7 @@ describe('v3/user-events', () => {
 await postUserEventToInfluxdbV3(msg);
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV3).toHaveBeenCalled();
 expect(mockPoint.setTag).toHaveBeenCalledWith('uaBrowserName', 'Chrome');
 expect(mockPoint.setTag).toHaveBeenCalledWith('uaBrowserMajorVersion', '96');
 expect(mockPoint.setTag).toHaveBeenCalledWith('uaOsName', 'Windows');
@@ -219,7 +221,7 @@ describe('v3/user-events', () => {
 await postUserEventToInfluxdbV3(msg);
-expect(utils.writeToInfluxWithRetry).toHaveBeenCalled();
+expect(utils.writeBatchToInfluxV3).toHaveBeenCalled();
 expect(mockPoint.setTag).toHaveBeenCalledWith('appId', 'abc-123-def');
 expect(mockPoint.setStringField).toHaveBeenCalledWith('appId_field', 'abc-123-def');
 expect(mockPoint.setTag).toHaveBeenCalledWith('appName', 'Sales Dashboard');


@@ -1,5 +1,5 @@
 import globals from '../../../globals.js';
-import { isInfluxDbEnabled, writeToInfluxWithRetry } from '../shared/utils.js';
+import { isInfluxDbEnabled, writeBatchToInfluxV1 } from '../shared/utils.js';
 /**
 * Posts Butler SOS memory usage metrics to InfluxDB v1.
@@ -50,12 +50,15 @@ export async function storeButlerMemoryV1(memory) {
 )}`
 );
+// Get max batch size from config
+const maxBatchSize = globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize');
 // Write with retry logic
-await writeToInfluxWithRetry(
-async () => await globals.influx.writePoints(datapoint),
+await writeBatchToInfluxV1(
+datapoint,
 'Memory usage metrics',
-'v1',
-'' // No specific error category for butler memory
+'INFLUXDB_V1_WRITE',
+maxBatchSize
 );
 globals.logger.verbose('MEMORY USAGE V1: Sent Butler SOS memory usage data to InfluxDB');
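Aside: the `maxBatchSize` value read from `Butler-SOS.influxdbConfig.maxBatchSize` above caps how many points go into a single InfluxDB write. A minimal sketch of the chunking a helper like `writeBatchToInfluxV1` could perform internally — hypothetical only; `writeInChunks` and its `writeFn` parameter are illustrative names, not Butler SOS's actual implementation, and `writeFn` stands in for the real client call (e.g. `globals.influx.writePoints`):

```javascript
// Hypothetical sketch: split an array of points into maxBatchSize-sized
// slices and write each slice separately, so no single write exceeds the cap.
async function writeInChunks(points, maxBatchSize, writeFn) {
    // Fall back to a single batch when the config value is missing or invalid
    const size =
        Number.isInteger(maxBatchSize) && maxBatchSize > 0 ? maxBatchSize : points.length || 1;
    for (let i = 0; i < points.length; i += size) {
        // Each slice is written (and could be retried) independently
        await writeFn(points.slice(i, i + size));
    }
}
```

With this shape, retry logic can wrap each chunk's `writeFn` call rather than the whole batch, which limits how much data a transient failure forces you to resend.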


@@ -1,5 +1,5 @@
 import globals from '../../../globals.js';
-import { isInfluxDbEnabled, writeToInfluxWithRetry } from '../shared/utils.js';
+import { isInfluxDbEnabled, writeBatchToInfluxV1 } from '../shared/utils.js';
 /**
 * Store event count in InfluxDB v1
@@ -97,11 +97,11 @@ export async function storeEventCountV1() {
 }
 // Write with retry logic
-await writeToInfluxWithRetry(
-async () => await globals.influx.writePoints(points),
+await writeBatchToInfluxV1(
+points,
 'Event counts',
-'v1',
-''
+'',
+globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
 );
 globals.logger.verbose('EVENT COUNT V1: Sent event count data to InfluxDB');
@@ -222,11 +222,11 @@ export async function storeRejectedEventCountV1() {
 }
 // Write with retry logic
-await writeToInfluxWithRetry(
-async () => await globals.influx.writePoints(points),
+await writeBatchToInfluxV1(
+points,
 'Rejected event counts',
-'v1',
-''
+'',
+globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
 );
 globals.logger.verbose(


@@ -3,7 +3,7 @@ import {
 getFormattedTime,
 processAppDocuments,
 isInfluxDbEnabled,
-writeToInfluxWithRetry,
+writeBatchToInfluxV1,
 } from '../shared/utils.js';
 /**
@@ -185,11 +185,11 @@ export async function storeHealthMetricsV1(serverTags, body) {
 ];
 // Write to InfluxDB v1 using node-influx library with retry logic
-await writeToInfluxWithRetry(
-async () => await globals.influx.writePoints(datapoint),
+await writeBatchToInfluxV1(
+datapoint,
 `Health metrics for ${serverTags.server_name}`,
-'v1',
-serverTags.server_name
+serverTags.server_name,
+globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
 );
 globals.logger.verbose(


@@ -1,5 +1,5 @@
 import globals from '../../../globals.js';
-import { isInfluxDbEnabled, writeToInfluxWithRetry } from '../shared/utils.js';
+import { isInfluxDbEnabled, writeBatchToInfluxV1 } from '../shared/utils.js';
 /**
 * Post log event to InfluxDB v1
@@ -217,11 +217,11 @@ export async function storeLogEventV1(msg) {
 );
 // Write with retry logic
-await writeToInfluxWithRetry(
-async () => await globals.influx.writePoints(datapoint),
+await writeBatchToInfluxV1(
+datapoint,
 `Log event from ${msg.source}`,
-'v1',
-msg.host
+msg.host,
+globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
 );
 globals.logger.verbose('LOG EVENT V1: Sent log event data to InfluxDB');


@@ -1,5 +1,5 @@
 import globals from '../../../globals.js';
-import { isInfluxDbEnabled, writeToInfluxWithRetry } from '../shared/utils.js';
+import { isInfluxDbEnabled, writeBatchToInfluxV1 } from '../shared/utils.js';
 /**
 * Store user event queue metrics to InfluxDB v1
@@ -79,11 +79,11 @@ export async function storeUserEventQueueMetricsV1() {
 }
 // Write with retry logic
-await writeToInfluxWithRetry(
-async () => await globals.influx.writePoints([point]),
+await writeBatchToInfluxV1(
+[point],
 'User event queue metrics',
-'v1',
-''
+'',
+globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
 );
 globals.logger.verbose('USER EVENT QUEUE METRICS V1: Sent queue metrics data to InfluxDB');
@@ -174,11 +174,11 @@ export async function storeLogEventQueueMetricsV1() {
 }
 // Write with retry logic
-await writeToInfluxWithRetry(
-async () => await globals.influx.writePoints([point]),
+await writeBatchToInfluxV1(
+[point],
 'Log event queue metrics',
-'v1',
-''
+'',
+globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
 );
 globals.logger.verbose('LOG EVENT QUEUE METRICS V1: Sent queue metrics data to InfluxDB');


@@ -1,5 +1,5 @@
 import globals from '../../../globals.js';
-import { isInfluxDbEnabled, writeToInfluxWithRetry } from '../shared/utils.js';
+import { isInfluxDbEnabled, writeBatchToInfluxV1 } from '../shared/utils.js';
 /**
 * Posts proxy sessions data to InfluxDB v1.
@@ -48,11 +48,11 @@ export async function storeSessionsV1(userSessions) {
 // Data points are already in InfluxDB v1 format (plain objects)
 // Write array of measurements with retry logic
-await writeToInfluxWithRetry(
-async () => await globals.influx.writePoints(userSessions.datapointInfluxdb),
+await writeBatchToInfluxV1(
+userSessions.datapointInfluxdb,
 `Proxy sessions for ${userSessions.host}/${userSessions.virtualProxy}`,
-'v1',
-userSessions.serverName
+userSessions.serverName,
+globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
 );
 globals.logger.debug(


@@ -1,5 +1,5 @@
 import globals from '../../../globals.js';
-import { isInfluxDbEnabled, writeToInfluxWithRetry } from '../shared/utils.js';
+import { isInfluxDbEnabled, writeBatchToInfluxV1 } from '../shared/utils.js';
 /**
 * Posts a user event to InfluxDB v1.
@@ -88,11 +88,11 @@ export async function storeUserEventV1(msg) {
 );
 // Write with retry logic
-await writeToInfluxWithRetry(
-async () => await globals.influx.writePoints(datapoint),
+await writeBatchToInfluxV1(
+datapoint,
 'User event',
-'v1',
-msg.host
+msg.host,
+globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
 );
 globals.logger.verbose('USER EVENT V1: Sent user event data to InfluxDB');


@@ -1,6 +1,6 @@
 import { Point } from '@influxdata/influxdb-client';
 import globals from '../../../globals.js';
-import { isInfluxDbEnabled, writeToInfluxWithRetry } from '../shared/utils.js';
+import { isInfluxDbEnabled, writeBatchToInfluxV2 } from '../shared/utils.js';
 /**
 * Posts Butler SOS memory usage metrics to InfluxDB v2.
@@ -52,27 +52,13 @@ export async function storeButlerMemoryV2(memory) {
 );
 // Write to InfluxDB with retry logic
-await writeToInfluxWithRetry(
-async () => {
-const writeApi = globals.influx.getWriteApi(org, bucketName, 'ns', {
-flushInterval: 5000,
-maxRetries: 0,
-});
-try {
-await writeApi.writePoint(point);
-await writeApi.close();
-} catch (err) {
-try {
-await writeApi.close();
-} catch (closeErr) {
-// Ignore close errors
-}
-throw err;
-}
-},
+await writeBatchToInfluxV2(
+[point],
 org,
 bucketName,
 'Memory usage metrics',
-'v2',
-''
+'',
+globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
 );
 globals.logger.verbose('MEMORY USAGE V2: Sent Butler SOS memory usage data to InfluxDB');
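The writeApi lifecycle removed above (create, write, close, close-again-on-error) is presumably what the shared `writeBatchToInfluxV2` helper now encapsulates, plus chunking by `maxBatchSize`. A sketch under those assumptions — `writeBatchV2Sketch` is a hypothetical name, and the client object is passed in explicitly so the example stays self-contained:

```javascript
// Sketch only: approximates what a shared v2 batch writer might do, based on
// the @influxdata/influxdb-client writeApi usage visible in the removed code.
async function writeBatchV2Sketch(influx, points, org, bucketName, maxBatchSize) {
    const size =
        Number.isInteger(maxBatchSize) && maxBatchSize > 0 ? maxBatchSize : points.length || 1;
    for (let i = 0; i < points.length; i += size) {
        const writeApi = influx.getWriteApi(org, bucketName, 'ns', {
            flushInterval: 5000,
            maxRetries: 0,
        });
        try {
            writeApi.writePoints(points.slice(i, i + size)); // buffers the points
            await writeApi.close(); // close() flushes the buffer to InfluxDB
        } catch (err) {
            try {
                await writeApi.close();
            } catch (closeErr) {
                // Ignore close errors; surface the original write error instead
            }
            throw err;
        }
    }
}
```

Centralizing this lifecycle removes the repeated try/close/close boilerplate that each v2 store function previously inlined.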


@@ -1,6 +1,6 @@
 import { Point } from '@influxdata/influxdb-client';
 import globals from '../../../globals.js';
-import { isInfluxDbEnabled, writeToInfluxWithRetry } from '../shared/utils.js';
+import { isInfluxDbEnabled, writeBatchToInfluxV2 } from '../shared/utils.js';
 import { applyInfluxTags } from './utils.js';
 /**
@@ -75,27 +75,13 @@ export async function storeEventCountV2() {
 }
 // Write to InfluxDB with retry logic
-await writeToInfluxWithRetry(
-async () => {
-const writeApi = globals.influx.getWriteApi(org, bucketName, 'ns', {
-flushInterval: 5000,
-maxRetries: 0,
-});
-try {
-await writeApi.writePoints(points);
-await writeApi.close();
-} catch (err) {
-try {
-await writeApi.close();
-} catch (closeErr) {
-// Ignore close errors
-}
-throw err;
-}
-},
+await writeBatchToInfluxV2(
+points,
 org,
 bucketName,
 'Event count metrics',
-'v2',
-''
+'',
+globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
 );
 globals.logger.verbose('EVENT COUNT V2: Sent event count data to InfluxDB');
@@ -179,27 +165,13 @@ export async function storeRejectedEventCountV2() {
 }
 // Write to InfluxDB with retry logic
-await writeToInfluxWithRetry(
-async () => {
-const writeApi = globals.influx.getWriteApi(org, bucketName, 'ns', {
-flushInterval: 5000,
-maxRetries: 0,
-});
-try {
-await writeApi.writePoints(points);
-await writeApi.close();
-} catch (err) {
-try {
-await writeApi.close();
-} catch (closeErr) {
-// Ignore close errors
-}
-throw err;
-}
-},
+await writeBatchToInfluxV2(
+points,
 org,
 bucketName,
 'Rejected event count metrics',
-'v2',
-''
+'',
+globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
 );
 globals.logger.verbose('REJECTED EVENT COUNT V2: Sent rejected event count data to InfluxDB');


@@ -1,6 +1,6 @@
 import { Point } from '@influxdata/influxdb-client';
 import globals from '../../../globals.js';
-import { isInfluxDbEnabled, writeToInfluxWithRetry } from '../shared/utils.js';
+import { isInfluxDbEnabled, writeBatchToInfluxV2 } from '../shared/utils.js';
 import { applyInfluxTags } from './utils.js';
 /**
@@ -216,27 +216,13 @@ export async function storeLogEventV2(msg) {
 globals.logger.silly(`LOG EVENT V2: Influxdb datapoint: ${JSON.stringify(point, null, 2)}`);
 // Write to InfluxDB with retry logic
-await writeToInfluxWithRetry(
-async () => {
-const writeApi = globals.influx.getWriteApi(org, bucketName, 'ns', {
-flushInterval: 5000,
-maxRetries: 0,
-});
-try {
-await writeApi.writePoint(point);
-await writeApi.close();
-} catch (err) {
-try {
-await writeApi.close();
-} catch (closeErr) {
-// Ignore close errors
-}
-throw err;
-}
-},
+await writeBatchToInfluxV2(
+[point],
 org,
 bucketName,
 `Log event for ${msg.host}`,
-'v2',
-msg.host
+msg.host,
+globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
 );
 globals.logger.verbose('LOG EVENT V2: Sent log event data to InfluxDB');


@@ -1,6 +1,6 @@
 import { Point } from '@influxdata/influxdb-client';
 import globals from '../../../globals.js';
-import { isInfluxDbEnabled, writeToInfluxWithRetry } from '../shared/utils.js';
+import { isInfluxDbEnabled, writeBatchToInfluxV2 } from '../shared/utils.js';
 import { applyInfluxTags } from './utils.js';
 /**
@@ -74,27 +74,13 @@ export async function storeUserEventQueueMetricsV2() {
 applyInfluxTags(point, configTags);
 // Write to InfluxDB with retry logic
-await writeToInfluxWithRetry(
-async () => {
-const writeApi = globals.influx.getWriteApi(org, bucketName, 'ns', {
-flushInterval: 5000,
-maxRetries: 0,
-});
-try {
-await writeApi.writePoint(point);
-await writeApi.close();
-} catch (err) {
-try {
-await writeApi.close();
-} catch (closeErr) {
-// Ignore close errors
-}
-throw err;
-}
-},
+await writeBatchToInfluxV2(
+[point],
 org,
 bucketName,
 'User event queue metrics',
-'v2',
-'user-events-queue'
+'user-events-queue',
+globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
 );
 globals.logger.verbose('USER EVENT QUEUE METRICS V2: Sent queue metrics data to InfluxDB');
@@ -174,27 +160,13 @@ export async function storeLogEventQueueMetricsV2() {
 applyInfluxTags(point, configTags);
 // Write to InfluxDB with retry logic
-await writeToInfluxWithRetry(
-async () => {
-const writeApi = globals.influx.getWriteApi(org, bucketName, 'ns', {
-flushInterval: 5000,
-maxRetries: 0,
-});
-try {
-await writeApi.writePoint(point);
-await writeApi.close();
-} catch (err) {
-try {
-await writeApi.close();
-} catch (closeErr) {
-// Ignore close errors
-}
-throw err;
-}
-},
+await writeBatchToInfluxV2(
+[point],
 org,
 bucketName,
 'Log event queue metrics',
-'v2',
-'log-events-queue'
+'log-events-queue',
+globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
 );
 globals.logger.verbose('LOG EVENT QUEUE METRICS V2: Sent queue metrics data to InfluxDB');


@@ -1,6 +1,6 @@
 import { Point as Point3 } from '@influxdata/influxdb3-client';
 import globals from '../../../globals.js';
-import { isInfluxDbEnabled, writeToInfluxWithRetry } from '../shared/utils.js';
+import { isInfluxDbEnabled, writeBatchToInfluxV3 } from '../shared/utils.js';
 /**
 * Posts Butler SOS memory usage metrics to InfluxDB v3.
@@ -48,11 +48,12 @@ export async function postButlerSOSMemoryUsageToInfluxdbV3(memory) {
 try {
 // Convert point to line protocol and write directly with retry logic
-await writeToInfluxWithRetry(
-async () => await globals.influx.write(point.toLineProtocol(), database),
+await writeBatchToInfluxV3(
+[point],
+database,
 'Memory usage metrics',
-'v3',
-'' // No specific server context for Butler memory
+'butler-memory',
+globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
 );
 globals.logger.debug(`MEMORY USAGE V3: Wrote data to InfluxDB v3`);
 } catch (err) {


@@ -1,6 +1,6 @@
import { Point as Point3 } from '@influxdata/influxdb3-client';
import globals from '../../../globals.js';
import { isInfluxDbEnabled, writeToInfluxWithRetry } from '../shared/utils.js';
import { isInfluxDbEnabled, writeBatchToInfluxV3 } from '../shared/utils.js';
/**
* Store event count in InfluxDB v3
@@ -40,6 +40,8 @@ export async function storeEventCountInfluxDBV3() {
const database = globals.config.get('Butler-SOS.influxdbConfig.v3Config.database');
try {
+        const points = [];
// Store data for each log event
for (const logEvent of logEvents) {
const tags = {
@@ -80,13 +82,7 @@ export async function storeEventCountInfluxDBV3() {
point.setTag(key, tags[key]);
});
-            await writeToInfluxWithRetry(
-                async () => await globals.influx.write(point.toLineProtocol(), database),
-                'Log event counts',
-                'v3',
-                'log-events'
-            );
-            globals.logger.debug(`EVENT COUNT INFLUXDB V3: Wrote log event data to InfluxDB v3`);
+            points.push(point);
}
// Loop through data in user events and create datapoints
@@ -129,15 +125,19 @@ export async function storeEventCountInfluxDBV3() {
point.setTag(key, tags[key]);
});
-            await writeToInfluxWithRetry(
-                async () => await globals.influx.write(point.toLineProtocol(), database),
-                'User event counts',
-                'v3',
-                'user-events'
-            );
-            globals.logger.debug(`EVENT COUNT INFLUXDB V3: Wrote user event data to InfluxDB v3`);
+            points.push(point);
}
+        await writeBatchToInfluxV3(
+            points,
+            database,
+            'Event counts',
+            'event-counts',
+            globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
+        );
+        globals.logger.debug(`EVENT COUNT INFLUXDB V3: Wrote event data to InfluxDB v3`);
globals.logger.verbose(
'EVENT COUNT INFLUXDB V3: Sent Butler SOS event count data to InfluxDB'
);
@@ -244,14 +244,13 @@ export async function storeRejectedEventCountInfluxDBV3() {
});
// Write to InfluxDB
-        for (const point of points) {
-            await writeToInfluxWithRetry(
-                async () => await globals.influx.write(point.toLineProtocol(), database),
-                'Rejected event counts',
-                'v3',
-                'rejected-events'
-            );
-        }
+        await writeBatchToInfluxV3(
+            points,
+            database,
+            'Rejected event counts',
+            'rejected-event-counts',
+            globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
+        );
globals.logger.debug(`REJECT LOG EVENT INFLUXDB V3: Wrote data to InfluxDB v3`);
globals.logger.verbose(


@@ -5,7 +5,7 @@ import {
processAppDocuments,
isInfluxDbEnabled,
applyTagsToPoint3,
-    writeToInfluxWithRetry,
+    writeBatchToInfluxV3,
validateUnsignedField,
} from '../shared/utils.js';
@@ -239,13 +239,16 @@ export async function postHealthMetricsToInfluxdbV3(serverName, host, body, serv
for (const point of points) {
// Apply server tags to each point
applyTagsToPoint3(point, serverTags);
-            await writeToInfluxWithRetry(
-                async () => await globals.influx.write(point.toLineProtocol(), database),
-                `Health metrics for ${host}`,
-                'v3',
-                serverName
-            );
        }
+        await writeBatchToInfluxV3(
+            points,
+            database,
+            `Health metrics for ${host}`,
+            'health-metrics',
+            globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
+        );
globals.logger.debug(`HEALTH METRICS V3: Wrote data to InfluxDB v3`);
} catch (err) {
// Track error count


@@ -1,6 +1,6 @@
import { Point as Point3 } from '@influxdata/influxdb3-client';
import globals from '../../../globals.js';
-import { isInfluxDbEnabled, writeToInfluxWithRetry } from '../shared/utils.js';
+import { isInfluxDbEnabled, writeBatchToInfluxV3 } from '../shared/utils.js';
/**
* Clean tag values for InfluxDB v3 line protocol
@@ -295,11 +295,12 @@ export async function postLogEventToInfluxdbV3(msg) {
}
}
-        await writeToInfluxWithRetry(
-            async () => await globals.influx.write(point.toLineProtocol(), database),
+        await writeBatchToInfluxV3(
+            [point],
+            database,
            `Log event for ${msg.host}`,
-            'v3',
-            msg.host
+            'log-events',
+            globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
);
globals.logger.debug(`LOG EVENT INFLUXDB V3: Wrote data to InfluxDB v3`);


@@ -1,6 +1,6 @@
import { Point as Point3 } from '@influxdata/influxdb3-client';
import globals from '../../../globals.js';
-import { isInfluxDbEnabled, writeToInfluxWithRetry } from '../shared/utils.js';
+import { isInfluxDbEnabled, writeBatchToInfluxV3 } from '../shared/utils.js';
/**
* Store user event queue metrics to InfluxDB v3
@@ -77,11 +77,12 @@ export async function postUserEventQueueMetricsToInfluxdbV3() {
}
}
-        await writeToInfluxWithRetry(
-            async () => await globals.influx.write(point.toLineProtocol(), database),
+        await writeBatchToInfluxV3(
+            [point],
+            database,
            'User event queue metrics',
-            'v3',
-            'user-events-queue'
+            'user-events-queue',
+            globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
);
globals.logger.verbose(
@@ -171,11 +172,12 @@ export async function postLogEventQueueMetricsToInfluxdbV3() {
}
}
-        await writeToInfluxWithRetry(
-            async () => await globals.influx.write(point.toLineProtocol(), database),
+        await writeBatchToInfluxV3(
+            [point],
+            database,
            'Log event queue metrics',
-            'v3',
-            'log-events-queue'
+            'log-events-queue',
+            globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
);
globals.logger.verbose(


@@ -1,6 +1,6 @@
import { Point as Point3 } from '@influxdata/influxdb3-client';
import globals from '../../../globals.js';
-import { isInfluxDbEnabled, writeToInfluxWithRetry } from '../shared/utils.js';
+import { isInfluxDbEnabled, writeBatchToInfluxV3 } from '../shared/utils.js';
/**
* Posts proxy sessions data to InfluxDB v3.
@@ -39,14 +39,14 @@ export async function postProxySessionsToInfluxdbV3(userSessions) {
// The datapointInfluxdb array contains summary points and individual session details
try {
if (userSessions.datapointInfluxdb && userSessions.datapointInfluxdb.length > 0) {
-            for (const point of userSessions.datapointInfluxdb) {
-                await writeToInfluxWithRetry(
-                    async () => await globals.influx.write(point.toLineProtocol(), database),
-                    `Proxy sessions for ${userSessions.host}/${userSessions.virtualProxy}`,
-                    'v3',
-                    userSessions.host
-                );
-            }
+            await writeBatchToInfluxV3(
+                userSessions.datapointInfluxdb,
+                database,
+                `Proxy sessions for ${userSessions.host}/${userSessions.virtualProxy}`,
+                userSessions.host,
+                globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
+            );
globals.logger.debug(
`PROXY SESSIONS V3: Wrote ${userSessions.datapointInfluxdb.length} datapoints to InfluxDB v3`
);


@@ -1,6 +1,6 @@
import { Point as Point3 } from '@influxdata/influxdb3-client';
import globals from '../../../globals.js';
-import { isInfluxDbEnabled, writeToInfluxWithRetry } from '../shared/utils.js';
+import { isInfluxDbEnabled, writeBatchToInfluxV3 } from '../shared/utils.js';
/**
* Sanitize tag values for InfluxDB line protocol.
@@ -100,11 +100,12 @@ export async function postUserEventToInfluxdbV3(msg) {
// Write to InfluxDB
try {
// Convert point to line protocol and write directly with retry logic
-        await writeToInfluxWithRetry(
-            async () => await globals.influx.write(point.toLineProtocol(), database),
+        await writeBatchToInfluxV3(
+            [point],
+            database,
            `User event for ${msg.host}`,
-            'v3',
-            msg.host
+            msg.host,
+            globals.config.get('Butler-SOS.influxdbConfig.maxBatchSize')
);
globals.logger.debug(`USER EVENT INFLUXDB V3: Wrote data to InfluxDB v3`);
} catch (err) {