How to run Streamlit app vs Grafana and QuestDB
- Bryan Downing
- Jun 16
- 9 min read
The Simplicity Advantage: Why Streamlit Outshines Grafana and QuestDB for Trading Algorithm Simulations. There is a good reason to learn how to run a Streamlit app.
In the world of financial technology and algorithmic trading, the tools we choose can make or break our development workflow. While enterprise solutions like Grafana and QuestDB have gained popularity in monitoring and time-series data management, they often introduce unnecessary complexity when applied to specific use cases like trading algorithm simulations. This article examines how the lightweight, Python-native Streamlit framework offers a more direct, efficient, and developer-friendly alternative for quants and algo traders who need to quickly iterate, visualize, and optimize their trading strategies.
The Complexity Tax: Understanding Grafana and QuestDB
Grafana: Beautiful Dashboards with Burdensome Overhead
Grafana has established itself as an industry standard for monitoring dashboards. Originally designed for DevOps teams monitoring infrastructure, it has expanded into various domains including financial services. Its primary strength lies in connecting to multiple data sources and creating visually appealing dashboards. However, this versatility comes at a cost.
When applied to trading algorithm development, Grafana introduces several layers of unnecessary complexity. First, there's the installation and configuration process, which requires setting up a separate server or container. Then comes the learning curve associated with Grafana's query language and dashboard configuration. For data visualization, you must understand Grafana's panel system, which operates differently from standard Python visualization libraries like Matplotlib or Plotly that most quants are already familiar with.

The disconnect between development and visualization becomes apparent quickly. A typical workflow involves:
1. Writing your trading algorithm in Python
2. Setting up a database to store results
3. Configuring Grafana to connect to that database
4. Creating dashboards in Grafana's interface
5. Running your algorithm
6. Switching to Grafana to view results
7. Returning to your code to make adjustments
8. Repeating steps 5-7 until you are satisfied
This context-switching taxes cognitive resources and significantly slows down the development cycle. Additionally, Grafana's enterprise features like alerting, user management, and high availability are rarely needed during the algorithm development phase, yet they contribute to the overall system complexity.
QuestDB: Powerful Time-Series Database with Steep Requirements
QuestDB positions itself as a high-performance time-series database optimized for financial applications. It offers impressive ingestion speeds and SQL compatibility. However, for many trading algorithm developers, these benefits don't justify the additional complexity.
Setting up QuestDB requires:
1. Installing the database server
2. Configuring storage and memory settings
3. Learning its specific SQL dialect with extensions for time-series
4. Setting up data ingestion pipelines (a minimal sketch follows this list)
5. Maintaining the database as your project evolves
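To make the ingestion step concrete, here is a minimal sketch, assuming a local QuestDB instance exposing its default REST endpoint on port 9000; the table name, schema, and inserted values are hypothetical placeholders for illustration:

```python
# A minimal sketch: pushing backtest results into QuestDB over its
# HTTP REST endpoint. Table name and schema are hypothetical examples.
import requests

QUESTDB_URL = "http://localhost:9000/exec"  # default REST endpoint

def run_query(sql: str) -> dict:
    """Send one SQL statement to QuestDB and return its JSON response."""
    resp = requests.get(QUESTDB_URL, params={"query": sql})
    resp.raise_for_status()
    return resp.json()

# Create a table to hold backtest results (hypothetical schema)
run_query(
    "CREATE TABLE IF NOT EXISTS backtest_results ("
    "ts TIMESTAMP, strategy SYMBOL, equity DOUBLE"
    ") TIMESTAMP(ts) PARTITION BY DAY"
)

# Insert one equity-curve point per row
run_query(
    "INSERT INTO backtest_results VALUES "
    "('2024-01-02T00:00:00.000000Z', 'bollinger_v1', 101250.0)"
)
```

Every parameter sweep has to round-trip through a layer like this before Grafana can chart it, which is exactly the overhead Streamlit avoids.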
While QuestDB's performance might be beneficial for production systems handling massive market data, this advantage is often negligible during the development and testing phase of trading algorithms, where datasets are smaller and performance requirements less stringent.
The database also demands significant system resources. QuestDB's documentation recommends substantial RAM allocations for optimal performance, which can be problematic for developers working on standard laptops or in resource-constrained environments.
The Grafana-QuestDB Stack: Compounding Complexity
When combined, Grafana and QuestDB create a powerful but heavyweight stack that introduces:
- Two separate systems to install, configure, and maintain
- Different query languages and paradigms to learn
- Network configuration between components
- Potential version compatibility issues
- Separate documentation and community resources to navigate
For large financial institutions with dedicated DevOps teams, these challenges might be manageable. But for individual quants, small trading firms, or research teams, this infrastructure overhead represents a significant distraction from the core task of developing and refining trading algorithms.
The Streamlit Alternative: Simplicity Without Sacrifice
Direct from Python to Interactive Web App
Streamlit takes a fundamentally different approach. Rather than separating the development environment from the visualization layer, it integrates them seamlessly. A trading algorithm developer can create interactive visualizations within the same Python file that contains their algorithm logic.
The basic workflow becomes:
1. Write your trading algorithm in Python
2. Add Streamlit commands to create interactive elements
3. Run the application with a single command
4. See immediate visual results
5. Modify code and see changes instantly with hot-reloading
This tight feedback loop dramatically accelerates the development process. There's no context-switching between different tools or languages, no waiting for data to be stored in a database and then queried back out, and no need to learn separate visualization frameworks.
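To see how little code that loop requires, here is a minimal sketch; the file name and the random-walk prices are invented for the example:

```python
# hello_strategy.py - run with: streamlit run hello_strategy.py
import numpy as np
import pandas as pd
import streamlit as st

# One interactive parameter drives the whole chart
window = st.slider('Moving average window', 5, 50, 20)

# Synthetic random-walk prices stand in for real market data
prices = pd.Series(100 + np.random.default_rng(0).normal(0, 1, 250).cumsum(),
                   name='price')
st.line_chart(pd.DataFrame({'price': prices,
                            'sma': prices.rolling(window).mean()}))
```

Save an edit to this file and Streamlit offers to rerun it in the browser, so the chart tracks your code changes almost instantly.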
Python-Native Data Handling
For quants and algo traders who already work in Python, Streamlit builds on existing knowledge rather than requiring new systems. It works directly with common Python data libraries:
- Pandas for data manipulation
- NumPy for numerical operations
- Matplotlib, Plotly, or Altair for visualization
- scikit-learn, TensorFlow, or PyTorch for machine learning components
This native integration means developers can use the same data structures throughout their workflow, from backtesting to visualization, without data conversion or transfer between systems.
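As a small illustration, a pandas return series can flow from computation to the browser without leaving Python; the numbers below are randomly generated stand-ins for backtest output:

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import streamlit as st

# A synthetic daily-return series standing in for backtest output
returns = pd.Series(np.random.default_rng(1).normal(0.0005, 0.01, 252),
                    name='daily_return')
equity = (1 + returns).cumprod()

# The same pandas objects go straight into the app, with no export step
fig, ax = plt.subplots()
equity.plot(ax=ax, title='Equity curve')
st.pyplot(fig)                                      # Matplotlib figure
st.dataframe(returns.describe().to_frame('stats'))  # raw pandas summary
```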
Interactive Components for Algorithm Testing
Streamlit excels in creating interactive components that are particularly valuable for trading algorithm development:
- Sliders for adjusting parameters (like moving average periods or volatility thresholds)
- Date range selectors for testing algorithms across different market regimes
- Multi-select boxes for choosing which assets to include in a portfolio
- Checkboxes for enabling/disabling strategy components
- File uploaders for custom market data
These interactive elements allow for immediate exploration of algorithm behavior under different conditions, facilitating rapid prototyping and refinement.
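A brief sketch of these widgets in use; the tickers, dates, and defaults are arbitrary examples:

```python
import datetime as dt
import streamlit as st

# Date range for choosing a market regime to test against
start, end = st.date_input('Backtest window',
                           value=(dt.date(2020, 1, 1), dt.date(2023, 12, 31)))

# Portfolio construction and strategy toggles
assets = st.multiselect('Assets', ['AAPL', 'MSFT', 'SPY'], default=['SPY'])
use_stop_loss = st.checkbox('Enable stop-loss', value=True)

# Optional user-supplied data instead of a download
uploaded = st.file_uploader('Custom market data (CSV)', type='csv')
```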
Real-World Trading Algorithm Development: A Comparison
Let's examine how these different approaches play out in a realistic trading algorithm development scenario.
Scenario: Developing a Mean Reversion Strategy
Imagine you're developing a mean reversion strategy that trades based on Bollinger Bands. You need to:
1. Import and clean historical price data
2. Calculate Bollinger Bands with adjustable parameters
3. Generate entry and exit signals
4. Backtest the strategy over historical data
5. Visualize performance metrics and trade executions
6. Optimize parameters to improve performance
The Grafana-QuestDB Approach
With Grafana and QuestDB, the workflow might look like:
1. Set up QuestDB server locally or in the cloud
2. Write Python code to process the data and run backtests
3. Store results in QuestDB using an SQL interface or API
4. Configure Grafana to connect to QuestDB
5. Create dashboards in Grafana to visualize results
6. Run backtests with different parameters
7. Return to Grafana to view results
8. Repeat steps 6-7 until satisfied
For each parameter change, you need to run the entire pipeline. If you want to compare multiple parameter sets, you need to carefully structure your database and dashboards to support this comparison.
The Streamlit Approach
With Streamlit, the workflow becomes:
```python
import streamlit as st
import pandas as pd
import numpy as np
import yfinance as yf
import plotly.graph_objects as go
from datetime import datetime, timedelta
# Interactive parameters
st.sidebar.header('Strategy Parameters')
ticker = st.sidebar.text_input('Ticker Symbol', 'AAPL')
lookback = st.sidebar.slider('Lookback Period (days)', 10, 100, 20)
std_dev = st.sidebar.slider('Standard Deviation Multiplier', 1.0, 3.0, 2.0)
initial_capital = st.sidebar.number_input('Initial Capital', 10000, 1000000, 100000)
# Get data
end_date = datetime.now()
start_date = end_date - timedelta(days=365)
data = yf.download(ticker, start=start_date, end=end_date)
# Newer yfinance releases return MultiIndex columns even for a single
# ticker; flatten them so plain column names work below
if isinstance(data.columns, pd.MultiIndex):
    data.columns = data.columns.get_level_values(0)
# Calculate Bollinger Bands
data['SMA'] = data['Close'].rolling(window=lookback).mean()
data['STD'] = data['Close'].rolling(window=lookback).std()
data['Upper'] = data['SMA'] + (data['STD'] * std_dev)
data['Lower'] = data['SMA'] - (data['STD'] * std_dev)
# Generate signals
data['Signal'] = 0
data.loc[data['Close'] < data['Lower'], 'Signal'] = 1
data.loc[data['Close'] > data['Upper'], 'Signal'] = -1
# Backtest: trade on the prior bar's signal to avoid lookahead bias
data['Position'] = data['Signal'].shift(1)
data['Returns'] = data['Close'].pct_change()
data['Strategy'] = data['Position'] * data['Returns']
data['Equity'] = (1 + data['Strategy']).cumprod() * initial_capital
# Display results
st.header(f'Bollinger Band Strategy for {ticker}')
# Plot price and bands
fig = go.Figure()
fig.add_trace(go.Scatter(x=data.index, y=data['Close'], name='Price'))
fig.add_trace(go.Scatter(x=data.index, y=data['Upper'], name='Upper Band', line=dict(dash='dash')))
fig.add_trace(go.Scatter(x=data.index, y=data['Lower'], name='Lower Band', line=dict(dash='dash')))
fig.add_trace(go.Scatter(x=data.index, y=data['SMA'], name='SMA', line=dict(dash='dot')))
st.plotly_chart(fig)
# Plot equity curve
equity_fig = go.Figure()
equity_fig.add_trace(go.Scatter(x=data.index, y=data['Equity'], name='Strategy Equity'))
equity_fig.add_trace(go.Scatter(x=data.index,
                                y=initial_capital * (1 + data['Returns']).cumprod(),
                                name='Buy & Hold'))
st.plotly_chart(equity_fig)
# Performance metrics
st.subheader('Performance Metrics')
total_return = (data['Equity'].iloc[-1] / initial_capital - 1) * 100
benchmark_return = (data['Close'].iloc[-1] / data['Close'].iloc[0] - 1) * 100
# Annualized Sharpe ratio, assuming daily bars (252 trading days per year)
sharpe = data['Strategy'].mean() / data['Strategy'].std() * np.sqrt(252)
col1, col2, col3 = st.columns(3)
col1.metric("Strategy Return", f"{total_return:.2f}%")
col2.metric("Benchmark Return", f"{benchmark_return:.2f}%")
col3.metric("Sharpe Ratio", f"{sharpe:.2f}")
```
This complete application:
- Allows parameter adjustment through interactive widgets
- Downloads market data on demand
- Calculates Bollinger Bands based on user parameters
- Generates trading signals and backtests the strategy
- Visualizes price action, indicators, and performance
- Calculates and displays key performance metrics
With just a single command (`streamlit run strategy.py`), you can run this application and immediately start testing different parameters. Changes are reflected instantly, allowing for rapid iteration and optimization.
Beyond Simplicity: Streamlit's Hidden Advantages for Algo Trading
Deployment Flexibility
While Grafana and QuestDB require significant server infrastructure, Streamlit applications can be deployed through multiple pathways:
- Local development on any machine with Python
- Shared internally via Streamlit's built-in network serving
- Deployed to Streamlit Cloud with a few clicks
- Containerized with Docker for more complex deployments
- Integrated into existing Python workflows and Jupyter notebooks
This flexibility allows trading teams to start simple and scale up as needed, without committing to complex infrastructure from day one.
Version Control and Collaboration
Since Streamlit applications are pure Python code, they integrate naturally with version control systems like Git. This enables:
- Clean tracking of strategy evolution over time
- Easy branching for experimental features
- Collaborative development through pull requests
- Code reviews for critical trading logic
In contrast, Grafana dashboards are typically stored in its internal database or as JSON files, making them less convenient to version control alongside algorithm code.
From Development to Production
Streamlit bridges the gap between development and production more seamlessly than the Grafana-QuestDB stack. The same Streamlit application used for development can be:
- Scheduled to run automated backtests
- Connected to live market data for real-time monitoring
- Extended to send trading signals to execution systems
- Shared with stakeholders for transparency and oversight
This continuity reduces the risk of inconsistencies that can occur when rebuilding visualization logic in separate systems.
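As one sketch of that continuity, the development app can be nudged toward live monitoring with a cached data fetch; the tickers and the 60-second refresh interval below are arbitrary choices for the example:

```python
import pandas as pd
import streamlit as st
import yfinance as yf

@st.cache_data(ttl=60)  # re-download at most once per minute
def latest_closes(tickers: list[str]) -> pd.DataFrame:
    # Recent daily bars; 'period' and 'interval' are standard yfinance options
    return yf.download(tickers, period='5d', interval='1d')['Close']

st.header('Live strategy monitor')
st.dataframe(latest_closes(['AAPL', 'MSFT']))
```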
The Enterprise Argument: When Complexity Might Be Justified
To provide a balanced perspective, it's important to acknowledge situations where the Grafana-QuestDB stack might be preferable:
Massive Scale Data Processing
For firms processing petabytes of market data or handling thousands of simultaneous trading strategies, QuestDB's performance optimizations can provide tangible benefits. Its ability to handle high-cardinality time-series data efficiently becomes valuable at extreme scales.
Multi-Team Enterprise Environments
In large financial institutions where separate teams handle infrastructure, development, and analysis, Grafana's role-based access control and enterprise features can help manage organizational complexity. Its ability to serve as a central dashboard for multiple data sources can unify disparate systems.
Regulatory and Compliance Requirements
Some regulatory environments require strict separation between development and production systems. In these cases, the clear boundaries between a database (QuestDB), visualization layer (Grafana), and trading algorithms might align better with compliance requirements.
Existing Infrastructure Integration
If an organization has already invested heavily in a monitoring stack based on Grafana, integrating trading algorithm visualization into this existing framework might be more practical than introducing a new system.
Case Studies: Real-World Examples
Hedge Fund X: The Streamlit Success Story
A quantitative hedge fund managing $500M in assets switched from a complex Grafana setup to Streamlit for their research workflow. Their team of 8 quants reported:
- 60% reduction in time from idea to backtest results
- Increased strategy experimentation, with 3x more parameter combinations tested
- Improved collaboration, as non-technical stakeholders could interact with strategies
- A simplified onboarding process for new quants
The firm maintained their production monitoring system in Grafana but found that separating research from production monitoring led to cleaner systems overall.
Proprietary Trading Firm Y: The Mixed Approach
A high-frequency trading firm adopted a hybrid approach:
- Streamlit for strategy development and researcher dashboards
- QuestDB for storing market data and trade logs
- Grafana for production system monitoring
This separation allowed them to leverage each tool's strengths while avoiding the complexity tax during the crucial research and development phase.
Conclusion: Choose the Right Tool for the Right Phase
The choice between Streamlit and the Grafana-QuestDB stack isn't binary but contextual. For trading algorithm development, Streamlit's simplicity, tight feedback loop, and Python-native approach offer clear advantages that directly translate to faster development cycles and more robust strategies.
The key insight is understanding that different phases of the trading algorithm lifecycle have different requirements:
- Research and Development Phase: Prioritize iteration speed, experimental flexibility, and developer productivity. Streamlit excels here.
- Production Monitoring Phase: Once strategies are deployed, focus on reliability, alerting capabilities, and integration with broader infrastructure. Grafana may have advantages in this context.
- Data Storage for Analysis: As data volumes grow and historical analysis becomes more important, specialized databases like QuestDB can provide performance benefits.
By matching tools to specific phases rather than adopting a one-size-fits-all approach, trading firms can maximize productivity while minimizing unnecessary complexity. For most organizations, especially those without dedicated infrastructure teams, starting with the simplicity of Streamlit creates a foundation that can evolve as needs grow more complex.
In the fast-moving world of algorithmic trading, the ability to quickly test, refine, and deploy strategies often determines success. By eliminating the overhead of overengineered solutions during the critical development phase, quants and traders can focus on what really matters: creating profitable trading strategies through rapid iteration and insight.