
Service Analyst Compliance

This article provides a comprehensive overview of the security measures and testing protocols implemented for Qminder’s Service Analyst feature.

Architecture

Qminder's Service Analyst uses a large language model (LLM). LLM inference runs on the same existing sub-processors: we use AWS Bedrock with Anthropic's models.

When a user interacts with Service Analyst, the request is sent to Qminder's Kotlin backend. The backend provides LLM tools to retrieve additional data from the database. All tool calls are secured using the same methods as any other Qminder functionality. We use row-level security for database access.
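To make the access model concrete, here is a minimal sketch of how a tool call can be scoped to the authenticated account. All names (`Visit`, `listVisitsTool`, the account ids) are hypothetical illustrations, not Qminder's actual code; in production the restriction is enforced by row-level security in the database itself, as the paragraph above states.

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch: each LLM tool call runs on behalf of the
// authenticated account, mirroring row-level security at the
// application layer.
public class ToolCallScoping {
    record Visit(String accountId, String visitorName) {}

    // Stand-in for the database; in production, row-level security in
    // the database restricts which rows a query can see.
    static final List<Visit> ALL_VISITS = List.of(
            new Visit("acct-1", "Alice"),
            new Visit("acct-2", "Bob"));

    // The tool exposed to the LLM. The account id comes from the
    // authenticated session, never from the model's own output.
    static List<Visit> listVisitsTool(String authenticatedAccountId) {
        return ALL_VISITS.stream()
                .filter(v -> v.accountId().equals(authenticatedAccountId))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // The tool only ever returns rows for the caller's own account.
        List<Visit> visible = listVisitsTool("acct-1");
        System.out.println(visible.size());               // 1
        System.out.println(visible.get(0).visitorName()); // Alice
    }
}
```

The key design point is that the account id is taken from the authenticated session, so nothing the model generates can widen the query's scope.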

The LLM query is sent to AWS Bedrock for inference. The data never leaves the AWS network, and AWS does not retain any inputs or outputs. After validating the LLM response, Qminder's Kotlin backend stores the inputs and outputs in Qminder's main database and returns the response to the end user.

Monitoring

To monitor LLM behavior, we use a self-hosted Langfuse cluster. We store all LLM inputs and outputs in Langfuse. Exported PDF and Excel files are not stored.

Only data engineers have access to Langfuse. Traces are retained for 7 days for monitoring and debugging purposes, after which they are deleted from Langfuse.
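The 7-day rule above can be expressed as a simple predicate: a trace is eligible for deletion once it is older than the retention window. This is an illustrative sketch only; the actual cleanup is handled by Langfuse's own retention mechanism.

```java
import java.time.Duration;
import java.time.Instant;

// Hypothetical sketch of the 7-day trace retention rule.
public class TraceRetention {
    static final Duration RETENTION = Duration.ofDays(7);

    // A trace is expired if it was created before (now - 7 days).
    static boolean isExpired(Instant createdAt, Instant now) {
        return createdAt.isBefore(now.minus(RETENTION));
    }

    public static void main(String[] args) {
        Instant now = Instant.parse("2024-06-15T00:00:00Z");
        // 14 days old: past the window, eligible for deletion.
        System.out.println(isExpired(Instant.parse("2024-06-01T00:00:00Z"), now)); // true
        // 1 day old: still within the window.
        System.out.println(isExpired(Instant.parse("2024-06-14T00:00:00Z"), now)); // false
    }
}
```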

Data retention

Langfuse traces are retained for 7 days.

Retention for LLM inputs and outputs, and exported PDF and Excel files, can be configured using the Data Retention feature.

By default, there is no retention policy: data is retained as long as there is an Agreement between Qminder and the Client (see Terms of Service).

The Data Retention policy can be configured between 7 days and 24 months.
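A minimal sketch of validating a client-configured retention period against these bounds. This is not Qminder's actual code, and the 24-month upper bound is approximated as 730 days purely for this illustration.

```java
// Hypothetical sketch of validating a Data Retention configuration.
public class RetentionConfig {
    static final int MIN_DAYS = 7;
    static final int MAX_DAYS = 730; // ~24 months (assumption for this sketch)

    // Rejects values outside the allowed 7-day-to-24-month range.
    static int validateRetentionDays(int days) {
        if (days < MIN_DAYS || days > MAX_DAYS) {
            throw new IllegalArgumentException(
                "Retention must be between 7 days and 24 months, got "
                + days + " days");
        }
        return days;
    }

    public static void main(String[] args) {
        System.out.println(validateRetentionDays(30)); // 30 (valid)
    }
}
```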

Client Data

All inputs, outputs, and exported files are considered Client Data. See the Terms of Service for how Client Data is handled.

Qminder does not use Client Data to train LLM models.