Event Logging
Requirements
- Completed the Server Setup Guide
- An enterprise organization
Azure Queue (Cloud)
The cloud instance of Bitwarden uses Azure Queue and Table Storage to handle events. Here's how this works:
- A user carries out an action which needs to be logged
- If the event is client-side (e.g. viewing a password), the client sends details of the event to the Events server project, which then calls `EventService`. If the event is server-side, the relevant project calls `EventService` itself (see the sketch after this list).
- The event is temporarily stored in Azure Queue Storage (which is designed for handling large numbers of messages)
- The EventsProcessor server project runs a regular batch job to retrieve events from Queue Storage and save them to Table Storage
- Events are retrieved from Table Storage for viewing
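As a concrete illustration, a server-side flow might log an event along these lines. This is a minimal sketch against the server's event service; the exact method and enum names here are assumptions, not necessarily the actual server code:

```csharp
using System.Threading.Tasks;

// Hypothetical server-side caller (a sketch, not actual server code).
public class InviteOrganizationUserCommand
{
    private readonly IEventService _eventService;

    public InviteOrganizationUserCommand(IEventService eventService)
    {
        _eventService = eventService;
    }

    public async Task InviteAsync(OrganizationUser organizationUser)
    {
        // ... perform the invite itself ...

        // Record the action so it shows up in the organization's event logs.
        // Method and enum names are assumed for illustration.
        await _eventService.LogOrganizationUserEventAsync(
            organizationUser, EventType.OrganizationUser_Invited);
    }
}
```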
To emulate this locally:

- Make sure you've installed and set up Azurite, as described in the Server Setup Guide
- Make sure that the `globalSettings:events:connectionString` user secret is not set, or has the default value of `UseDevelopmentStorage=true`
- Start the Events and EventsProcessor projects using `dotnet run` or your IDE, as shown below. (Also ensure you have Api, Identity and your web vault running.)
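For example, from the root of the server repository (paths assume the standard repository layout):

```bash
# In one terminal:
cd src/Events && dotnet run

# In a second terminal:
cd src/EventsProcessor && dotnet run
```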
You should now observe that your enterprise organization is logging events (e.g. when creating an item or inviting a user). These should appear in the Event Logs section of the organization vault.
Azure Storage Explorer lets you inspect the contents of your local Queue and Table Storage and is extremely useful for debugging.
Database storage (Self-hosted)
Self-hosted instances of Bitwarden use an alternative `EventService` implementation to write event logs directly to the `Event` table in their database.
To use database storage for events:
- Run your local development server in a self-hosted configuration (Api, Identity and web vault), as sketched below
- Start the Events project using `dotnet run` or your IDE (note: EventsProcessor is not required for self-hosted)
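Self-hosted behavior is driven by the `globalSettings:selfHosted` flag; a minimal `secrets.json` fragment might look like the following (a sketch, assuming the rest of your self-hosted configuration is already in place):

```json
{
  "globalSettings": {
    "selfHosted": true
  }
}
```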
Distributed events (optional)
Events can be distributed via an AMQP messaging system. This messaging system enables new integrations to subscribe to the events. The system supports either RabbitMQ or Azure Service Bus.
Listener / Handler pattern
The goal of moving to distributed events is to build additional service integrations that consume events. To make it easy to support multiple AMQP services (RabbitMQ and Azure Service Bus), the act of listening to the stream of events is decoupled from the act of responding to an event.
Listeners
- One listener per communication platform (e.g. one for RabbitMQ, one for Azure Service Bus).
- Multiple instances can be configured to run independently, each with its own handler and subscription / queue.
- Perform all aspects of setup / teardown, subscription, etc. for the messaging platform, but do not directly process any events themselves. Instead, they delegate to the handler with which they are configured.
Handlers
- One handler per integration (e.g. HTTP post or event database repository).
- Are completely isolated from, and know nothing of, the messaging platform in use. This allows them to be freely reused across different communication platforms.
- Perform all aspects of handling an event.
- Are highly testable, as they are isolated and decoupled from the more complicated aspects of messaging.
This combination allows for a configuration inside of `Startup.cs` that pairs an instance of the listener service for the currently running messaging platform with any number of handlers. It also allows for quick development of new handlers, as they are focused only on the task of handling a specific event.
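To make the decoupling concrete, here is a minimal sketch of the pattern (the class shapes below are illustrative assumptions, not the actual server code):

```csharp
using System.Threading.Tasks;

// Handlers know nothing about the messaging platform, so the same handler
// can be reused behind RabbitMQ or Azure Service Bus.
public interface IEventMessageHandler
{
    Task HandleEventAsync(EventMessage eventMessage);
}

// A listener owns the platform-specific plumbing (connections, queues,
// subscriptions) and delegates every received event to its handler.
public class ExampleEventListenerService
{
    private readonly IEventMessageHandler _handler;

    public ExampleEventListenerService(IEventMessageHandler handler)
    {
        _handler = handler;
    }

    // Called by the platform-specific subscription once a message
    // has been received and deserialized.
    public Task OnMessageReceivedAsync(EventMessage message) =>
        _handler.HandleEventAsync(message);
}
```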
RabbitMQ implementation
The RabbitMQ implementation adds a step that changes the way events are handled when running locally or self-hosted. Instead of writing directly to the `Events` table via the `EventsRepository`, each event is broadcast to a RabbitMQ exchange. A new `RabbitMqEventListenerService` instance, configured with an `EventRepositoryHandler`, subscribes to the RabbitMQ exchange and writes to the `Events` table via the `EventsRepository`. The end result is the same (events are stored in the database), but the addition of the RabbitMQ exchange allows other integrations to subscribe.
Additional handlers - each paired with its own `RabbitMqEventListenerService` and listening to its own queue - can be configured as well:

- `SlackEventHandler` posts messages to Slack channels or DMs.
- `WebhookEventHandler` `POST`s each event to a configurable URL.
Running the RabbitMQ container
- Verify that you've set a username and password in the `.env` file (see `.env.example` for an example, and the sketch below)
- Use Docker Compose to run the container with your current settings:

  `docker compose --profile rabbitmq up -d`

  - The compose configuration uses the username and password from the `.env` file.
  - It is configured to run on localhost with RabbitMQ's standard ports, but this can be customized in the Docker configuration.
- To verify this is running, open http://localhost:15672 in a browser and log in with the username and password in your `.env` file.
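For reference, a minimal `.env` sketch (the variable names are assumptions based on `.env.example`; choose your own credentials):

```bash
# RabbitMQ credentials consumed by the compose profile (names assumed).
RABBITMQ_DEFAULT_USER=bitwarden
RABBITMQ_DEFAULT_PASS=SET_A_PASSWORD_HERE_123
```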
Configuring the server to use RabbitMQ for events
- Add the following to your `secrets.json` file, changing the defaults to match your `.env` file:

  ```json
  "eventLogging": {
    "rabbitMq": {
      "hostName": "localhost",
      "username": "bitwarden",
      "password": "SET_A_PASSWORD_HERE_123",
      "exchangeName": "events-exchange",
      "eventRepositoryQueueName": "events-write-queue",
      "slackQueueName": "events-slack-queue",
      "webhookQueueName": "events-webhook-queue"
    }
  }
  ```

  :::info
  The `slackQueueName` and `webhookQueueName` specified above are optional. If they are not defined, the system will use the above default names.
  :::

- Re-run the PowerShell script to add these secrets to each Bitwarden project:

  `pwsh setup_secrets.ps1`

- Start (or restart) all of your projects to pick up the new settings
With these changes in place, you should see the database events written as before, but you'll also see in the RabbitMQ management interface that the messages are flowing through the configured exchange/queues.
Azure Service Bus implementation
The Azure Service Bus implementation is a configurable replacement for Azure Queue. Instead of writing events to the queue to be picked up, they are sent to the configured service bus topic. An instance of `AzureServiceBusEventListenerService` is then configured with the `AzureTableStorageEventHandler` to subscribe to that topic and write events to Azure Table Storage. Similar to RabbitMQ above, the end result is the same (events are stored in Azure Table Storage), but the addition of the service bus topic allows for other integrations to subscribe.
As with the RabbitMQ implementation above, a `SlackEventHandler` and `WebhookEventHandler` can be configured to publish events to Slack and/or a webhook.
Running the Azure Service Bus emulator
- Make sure you have Azurite set up locally (as per the normal instructions for writing events to Azure Table Storage). In addition, this assumes you're using the `mssql` default profile and have `${MSSQL_PASSWORD}` set via `.env`.
- Run Docker Compose to add/start the local emulator:

  `docker compose --profile servicebus up -d`

:::info
The service bus emulator waits 15 seconds before starting. You can check the console in Docker Desktop or run `docker logs service-bus` to verify the service is up before launching the server.
:::
Configuring the server to use Azure Service Bus for events
- Add the following to your `secrets.json` in `dev` to configure the service bus:

  ```json
  "eventLogging": {
    "azureServiceBus": {
      "connectionString": "\"Endpoint=sb://localhost;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=SAS_KEY_VALUE;UseDevelopmentEmulator=true;\"",
      "topicName": "event-logging",
      "eventRepositorySubscriptionName": "events-write-subscription",
      "slackSubscriptionName": "events-slack-subscription",
      "webhookSubscriptionName": "events-webhook-subscription"
    }
  }
  ```

  :::info
  The `slackSubscriptionName` and `webhookSubscriptionName` specified above are optional. If they are not defined, the system will use the above default names.
  :::

- Re-run the secrets script to publish the new secrets:

  `pwsh setup_secrets.ps1 -clear`

- Start or re-start all services, including `EventsProcessor`.
Integrations and integration configurations
Organizations can set up integration configurations to send events to different endpoints -- each handler maps to a specific integration and checks for its configuration when it receives an event. Currently, there are integrations / handlers for Slack and webhooks (as mentioned above).
`OrganizationIntegration`

- The top-level object that enables a specific integration for the organization.
- Includes any properties that apply to the entire integration across all events.
- For Slack, it consists of the token: `{ "token": "xoxb-token-from-slack" }`
- For webhooks, it is `null`. However, even though there is no configuration, an organization must have a webhook `OrganizationIntegration` to enable configuration via `OrganizationIntegrationConfiguration`.
`OrganizationIntegrationConfiguration`

- This contains the configurations specific to each `EventType` for the integration.
- `Configuration` contains the event-specific configuration.
  - For Slack, this would contain which channel to send the message to: `{ "channelId": "C123456" }`
  - For webhooks, this is the URL the request should be sent to: `{ "url": "https://api.example.com" }`
- `Template` contains a template string that is expected to be filled in with the contents of the actual event, as shown in the example below.
  - The tokens in the string are wrapped in `#` characters. For instance, the UserId would be `#UserId#`.
  - The `IntegrationTemplateProcessor` does the actual work of replacing these tokens with introspected values from the provided `EventMessage`.
  - The template does not enforce any structure -- it could be a freeform text message to send via Slack, or a JSON body to send via webhook; it is simply stored and used as a string for the most flexibility.
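For example, a webhook `Template` might be a JSON body assembled from tokens. Only `#UserId#` is confirmed by the text above; the other token names are illustrative assumptions:

```json
{
  "user": "#UserId#",
  "type": "#Type#",
  "occurredAt": "#Date#"
}
```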