Beelzebub API v1
Overview
The Beelzebub Honeypot Framework provides a flexible system for creating and deploying various types of honeypots that simulate vulnerable services to detect and analyze potential attacks. The framework supports multiple protocols including HTTP, SSH, and TCP, with customizable response behaviors.
API Structure
All API configurations in the Beelzebub Honeypot Framework follow a common pattern:
apiVersion: Specifies the API version (currently "v1")
protocol: Defines the protocol being emulated (http, ssh, tcp)
address: The network address and port to listen on (e.g., ":8080", ":22")
description: A human-readable description of the honeypot service
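Putting these shared fields together, a minimal top-level configuration sketch might look like the following (assuming the framework's YAML-based configuration files; the values shown are illustrative):

```yaml
apiVersion: "v1"       # API version (currently "v1")
protocol: "http"       # protocol to emulate: http, ssh, or tcp
address: ":8080"       # network address and port to listen on
description: "Example honeypot service"
```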
Protocol-Specific Configurations
HTTP Honeypot
HTTP honeypots simulate web servers and web applications, allowing for customized responses to incoming HTTP requests.
Sample Configuration:
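A sketch of what an HTTP honeypot configuration could look like, assuming YAML configuration files and that the regex is applied to the incoming request path; the matched paths, response body, and header values are illustrative, not taken from the project:

```yaml
apiVersion: "v1"
protocol: "http"
address: ":8080"
description: "Apache-like web server"
commands:
  - regex: "^(/|/index.html)$"          # match requests to the site root
    handler: "<html><body><h1>It works!</h1></body></html>"
    headers:
      - "Content-Type: text/html"
      - "Server: Apache/2.4.53 (Debian)"
    statusCode: 200
```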
Key Components:
commands: Array of command configurations
regex: Regular expression pattern to match incoming requests
handler: Response content to return
headers: HTTP headers to include in the response
statusCode: HTTP status code to return
Plugin Support:
The HTTP protocol supports the LLMHoneypot plugin for AI-powered responses
SSH Honeypot
SSH honeypots simulate SSH servers, providing interactive command-line interfaces to attackers.
Sample Configuration (Standard):
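A sketch of a static SSH configuration using only canned command responses (YAML layout assumed; the fake command output, version strings, and password list are illustrative):

```yaml
apiVersion: "v1"
protocol: "ssh"
address: ":2222"
description: "SSH interactive honeypot"
commands:
  - regex: "^ls$"                # fake directory listing
    handler: "bin  boot  etc  home  root  tmp  usr  var"
  - regex: "^pwd$"
    handler: "/root"
serverVersion: "OpenSSH"
serverName: "ubuntu"
passwordRegex: "^(root|password|123456)$"
deadlineTimeoutSeconds: 60
```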
Sample Configuration (LLM-powered):
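A sketch of the LLM-powered variant: a catch-all regex hands every command to the LLMHoneypot plugin, and a plugin block supplies the provider settings described below (the secret key is a placeholder):

```yaml
apiVersion: "v1"
protocol: "ssh"
address: ":2222"
description: "SSH interactive honeypot (LLM-powered)"
commands:
  - regex: "^(.+)$"              # forward every command to the plugin
    plugin: "LLMHoneypot"
serverVersion: "OpenSSH"
serverName: "ubuntu"
passwordRegex: "^(root|qwerty|admin)$"
deadlineTimeoutSeconds: 60
plugin:
  llmProvider: "openai"
  llmModel: "gpt-4o"
  openAISecretKey: "<your-openai-api-key>"
```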
Key Components:
commands: Array of command patterns and responses
regex: Regular expression to match user commands
handler: Text output to display in response to the command
plugin: Optional plugin name to handle the command (e.g., "LLMHoneypot")
serverVersion: SSH server version string to display
serverName: Server name to display (e.g., "ubuntu")
passwordRegex: Regular expression defining accepted passwords
deadlineTimeoutSeconds: Session timeout in seconds
Plugin Configuration:
For LLM-powered SSH honeypots:
llmProvider: The LLM service provider (e.g., "openai", "ollama")
llmModel: The specific model to use (e.g., "gpt-4o")
openAISecretKey: API key for the LLM service
TCP Honeypot
TCP honeypots emulate various TCP-based services with customizable banners.
Sample Configuration:
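An illustrative sketch of a TCP configuration, here presenting an SMTP-style greeting; the banner text and port are placeholders:

```yaml
apiVersion: "v1"
protocol: "tcp"
address: ":25"
description: "SMTP banner honeypot"
banner: "220 mail.example.com ESMTP Postfix (Ubuntu)"
deadlineTimeoutSeconds: 30
```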
Key Components:
banner: Text sent to clients upon connection
deadlineTimeoutSeconds: Connection timeout in seconds
LLMHoneypot Plugin
The LLMHoneypot plugin provides AI-powered responses to attacker inputs using language models.
Compatibility: Only available for HTTP and SSH protocols.
Configuration Parameters:
llmProvider: The AI service provider (currently supports "openai", "ollama")
llmModel: The language model to use (e.g., "gpt-4o")
openAISecretKey: Authentication key for the LLM service
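In a configuration file these parameters sit in a plugin block and apply to any command entry whose plugin field is set to "LLMHoneypot", as in the SSH example above (a sketch; the Ollama model name is illustrative):

```yaml
plugin:
  llmProvider: "ollama"
  llmModel: "llama3"   # when llmProvider is "openai", also set openAISecretKey
```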
Best Practices
Port Selection:
Use standard ports for better attacker engagement (e.g., :22 for SSH, :80/:443 for HTTP)
When running multiple honeypots for the same protocol, place the additional instances on non-standard ports
Response Configuration:
Create realistic command responses that mimic actual systems
Include deliberate vulnerabilities or information leaks to engage attackers
LLM Integration:
Use LLM-powered honeypots for more dynamic and convincing interactions
Configure appropriate timeouts to manage resource usage
Password Complexity:
Include common weak passwords in passwordRegex to attract brute-force attempts (see the sketch below)
Mix simple passwords with moderately complex ones for a realistic representation
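For example, a passwordRegex mixing very weak credentials with a few moderately complex ones (values are illustrative):

```yaml
passwordRegex: "^(root|admin|123456|qwerty|P@ssw0rd2024|Summer2024!)$"
```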
Session Management:
Set appropriate deadlineTimeoutSeconds values based on expected interaction patterns
Lower timeouts (10-30 seconds) for simple services
Higher timeouts (60-120+ seconds) for interactive sessions
Implementation Examples
Basic HTTP Authentication Honeypot
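A sketch of one way to build this: the protected path answers with 401 and a WWW-Authenticate challenge to invite credential guessing, while the landing page advertises the admin area. Paths, realm, and body text are illustrative:

```yaml
apiVersion: "v1"
protocol: "http"
address: ":8080"
description: "Admin panel behind HTTP Basic Auth"
commands:
  - regex: "^/admin(/.*)?$"             # any request under /admin
    handler: "401 Unauthorized"
    headers:
      - "Content-Type: text/plain"
      - "WWW-Authenticate: Basic realm=\"Admin Area\""
    statusCode: 401
  - regex: "^/$"                        # landing page hinting at the admin panel
    handler: "<html><body><a href=\"/admin\">Administration</a></body></html>"
    headers:
      - "Content-Type: text/html"
    statusCode: 200
```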
Interactive SSH Honeypot with LLM
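A sketch combining the SSH settings and the LLMHoneypot plugin shown earlier, here pointed at an Ollama-served model and using a longer session timeout for interactive use; the password list and model name are illustrative:

```yaml
apiVersion: "v1"
protocol: "ssh"
address: ":22"
description: "Interactive SSH honeypot backed by a local LLM"
commands:
  - regex: "^(.+)$"              # every command is answered by the LLM
    plugin: "LLMHoneypot"
serverVersion: "OpenSSH"
serverName: "ubuntu"
passwordRegex: "^(root|toor|admin|123456)$"
deadlineTimeoutSeconds: 120      # longer timeout for interactive sessions
plugin:
  llmProvider: "ollama"
  llmModel: "llama3"
```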
Database Service Honeypot
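A sketch of a TCP honeypot presenting a MySQL-style banner on the standard database port; the banner string is illustrative:

```yaml
apiVersion: "v1"
protocol: "tcp"
address: ":3306"
description: "MySQL service honeypot"
banner: "5.7.34-log MySQL Community Server (GPL)"
deadlineTimeoutSeconds: 10
```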