From Friction to Flow: Designing Smarter Dashboards with {bidux}
By Jeremy R. Winget in Blog
June 19, 2025
{bidux} v0.1.0 is live on CRAN!
Users don't just see your dashboard: They interpret it, navigate it, and act (or fail to act) based on it. That's not just design; it's cognition.
Learn more below or explore the repo: github.com/jrwinget/bidux
Why UX Is Too Often an Afterthought
In data analytics and dashboard development, we often get the logic right: The data is clean, the calculations are correct, and the visualizations are technically sound. But when the dashboard goes live, something breaks.
Users ignore insights. Or worse, they disengage entirely.
This isn't a reflection of poor technical work. It's a sign that something deeper is missing: A bridge between how we design interfaces and how people actually think.
Most developers aren't trained in psychology or UX. That's not a failing; it's a gap in the pipeline. Users interpret our tools through human lenses: They carry cognitive limitations, rely on mental shortcuts, and make judgments shaped by emotion, context, and bias.
To design dashboards that not only work, but resonate, we need to design for the mind, not just the machine.
A New Starting Point: BID + {bidux}
The Behavior Insight Design (BID) framework offers a structured, evidence-based approach to building more intuitive, cognitively supportive dashboards. Developed at the intersection of behavioral science, data storytelling, and interface design, BID maps out five stages that reflect how users actually engage with information:
1. Notice the Problem
2. Interpret the User's Needs
3. Structure the App
4. Anticipate User Behavior
5. Validate & Empower the User
Each stage helps developers reduce friction, surface insight, and support better decision-making, even without a background in psychology or UX.
The {bidux} R package makes this framework practical for developers. It provides a step-by-step workflow, concept dictionaries, and component suggestions that bring behavioral design directly into your Shiny development process.
Together, BID and {bidux} help you turn psychological friction into flow.
What Makes BID Different?
Most design systems focus on aesthetics or layout heuristics. BID starts earlier by asking what your user needs to think, feel, and decide at each stage of interaction.
BID is grounded in cognitive psychology, decision science, and information processing theory. It doesn't just tell you what to build; it explains why certain design choices succeed or fail, based on decades of empirical research.
And while other frameworks often isolate usability issues or user behavior, BID treats these mechanisms as dynamically linked. For instance:
- How cognitive load affects susceptibility to bias
- How early layout decisions shape later interpretations
- How interface design impacts individual and group coordination
{bidux} brings this theory into practice, giving you the tools to identify friction points, document key decisions, and structure your dashboard with the user's cognition in mind.
The Five BID Stages (with {bidux} Examples)
1️⃣ Notice the Problem
Goal: Identify where users struggle cognitively, visually, or emotionally.
Most friction stems from overload: Too many filters, unclear hierarchies, or competing focal points.
# install.packages("bidux")
library(bidux)

stg_notice <- bid_notice(
  problem = "Users can't find the key metrics",
  evidence = "70% of testers took >30s locating the primary KPI",
  theory = "Visual Hierarchy"
)
Stage 1 (Notice) completed. (20% complete)
- Problem: Users can't find the key metrics
- Theory: Visual Hierarchy
- Evidence: 70% of testers took >30s locating the primary KPI
- Next: Use bid_interpret() for Stage 2
Try: Swapping dropdowns for grouped radio buttons; surfacing KPIs higher in the layout.
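As a minimal sketch of that first suggestion (the filter and KPI names are hypothetical), grouped radio buttons expose every option at once instead of hiding them behind a dropdown, and the KPI sits at the top of the layout:

```r
library(shiny)

ui <- fluidPage(
  # KPI surfaced first, before any controls
  h2(textOutput("kpi")),
  # radio buttons show all choices at a glance, unlike a dropdown
  radioButtons(
    "region",
    label = "Region",
    choices = c("North", "South", "East", "West"),
    inline = TRUE
  )
)

server <- function(input, output, session) {
  output$kpi <- renderText(paste("Primary KPI for region:", input$region))
}

# shinyApp(ui, server)
```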
2️⃣ Interpret the User's Needs
Goal: Center your design around the core questions users are trying to answer.
Users don't want every chart: They want clarity about what matters and, often more importantly, what to do about it.
stg_interpret <- bid_interpret(
  stg_notice,
  central_question = "Where are sales underperforming?",
  data_story = list(hook = "...", tension = "...", resolution = "...")
)
Stage 2 (Interpret) completed.
- Central question: Where are sales underperforming?
- Your data story is almost complete (75%). Consider adding: context.
- Your central question is appropriately scoped.
- No user personas defined
Try: Structuring the app like a narrative; defining user personas to anchor your story.
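The console output above flags a missing `context` element in the data story. A sketch of the same call with the story completed, reusing the `stg_notice` object from Stage 1 (all strings are illustrative placeholders):

```r
library(bidux)

stg_interpret <- bid_interpret(
  stg_notice,
  central_question = "Where are sales underperforming?",
  data_story = list(
    hook       = "Q3 revenue missed target",                 # grab attention
    context    = "Most of the gap comes from two regions",   # the flagged gap
    tension    = "Which products and regions are responsible?",
    resolution = "A drill-down view ranks regions by shortfall"
  )
)
```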
3️⃣ Structure the App
Goal: Organize layout and flow to reduce decision errors and guide attention.
stg_structure <- bid_structure(
  stg_interpret,
  layout = "dual_process",
  concepts = c("principle_of_proximity", "default_effect")
)
Stage 3 (Structure) completed.
- Layout: dual_process
- Concepts: Principle of Proximity, Default Effect
- No accessibility considerations specified
Try: Grouping filters near relevant charts; using bslib::card_body(padding = 4) for visual spacing.
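A minimal sketch of the proximity suggestion: the filter sits inside the same bslib card as the chart it controls, with the suggested body padding (input and output IDs are hypothetical):

```r
library(shiny)
library(bslib)

ui <- page_fillable(
  card(
    card_header("Sales by region"),
    card_body(
      padding = 4,  # visual breathing room, per the suggestion above
      # the filter lives right next to the chart it drives
      selectInput("metric", "Metric", choices = c("Revenue", "Units")),
      plotOutput("sales_plot")
    )
  )
)
```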
4️⃣ Anticipate User Behavior
Goal: Account for predictable biases in how users interpret and interact with data.
People often anchor on the first number they see, seek out confirming evidence, and interpret identical outcomes differently depending on how they're framed.
stg_anticipate <- bid_anticipate(
  stg_structure,
  bias_mitigations = list(
    anchoring = "Use multiple benchmarks",
    framing = "Allow toggling between gain/loss framing"
  )
)
Stage 4 (Anticipate) completed.
- Bias mitigations: 2 defined
- Interaction principles: 2 defined
- Key suggestions: anchoring mitigation: Always show reference points like previous period, budget, or industry average., framing mitigation: Toggle between progress (65% complete) and gap (35% remaining) framing., Consider also addressing these common biases: confirmation
Try: Showing both "65% complete" and "35% remaining"; providing scenario toggles to challenge assumptions.
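One way to sketch that framing toggle in Shiny (the completion value and input names are hypothetical):

```r
library(shiny)

ui <- fluidPage(
  radioButtons("frame", "Framing",
               choices = c("Progress", "Gap"), inline = TRUE),
  textOutput("status")
)

server <- function(input, output, session) {
  completion <- 0.65  # hypothetical metric
  output$status <- renderText({
    # identical data, two frames: progress made vs. gap remaining
    if (input$frame == "Progress") {
      sprintf("%.0f%% complete", completion * 100)
    } else {
      sprintf("%.0f%% remaining", (1 - completion) * 100)
    }
  })
}

# shinyApp(ui, server)
```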
5️⃣ Validate & Empower the User
Goal: Reinforce clarity, support confidence, and enable action (individually or as a team).
stg_validate <- bid_validate(
  stg_anticipate,
  summary_panel = "Top 3 Insights",
  collaboration = "Team annotation or export options"
)
Stage 5 (Validate) completed.
- Summary panel: Top 3 Insights
- Collaboration: Team annotation or export options
- Next steps: 5 items defined
- Validation stage is well-defined. Focus on implementation and user testing.
Try: Ending with a clear takeaway panel; adding next-step checklists or shareable reports.
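A sketch of such a closing takeaway panel with an export hook, using bslib and Shiny's download machinery (the insight text and IDs are illustrative):

```r
library(shiny)
library(bslib)

ui <- page_fillable(
  card(
    card_header("Top 3 Insights"),
    card_body(
      tags$ul(
        tags$li("Region South is furthest below target"),
        tags$li("Two SKUs account for most of the gap"),
        tags$li("The trend reversed after the June price change")
      ),
      # shareable report: export the takeaways as a CSV
      downloadButton("export", "Export summary")
    )
  )
)

server <- function(input, output, session) {
  insights <- data.frame(
    insight = c("South below target", "Two SKUs drive gap", "June reversal")
  )
  output$export <- downloadHandler(
    filename = function() "insights.csv",
    content = function(file) write.csv(insights, file, row.names = FALSE)
  )
}

# shinyApp(ui, server)
```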
Try It Out
Example flow:
library(bidux)

# Start with identifying a friction point
workflow <- bid_notice(
  problem = "Too many filters",
  evidence = "Users forget the currently selected options"
) |>
  # Align the dashboard with user goals
  bid_interpret(central_question = "Which products underperformed?") |>
  # Apply layout and design structure
  bid_structure(layout = "dual_process") |>
  # Address predictable cognitive biases
  bid_anticipate(
    bias_mitigations = list(
      anchoring = "Include prior year as reference",
      confirmation_bias = "Show competing explanations"
    )
  ) |>
  # Reinforce clarity and enable collaboration
  bid_validate(summary_panel = "Key takeaways with export")
Auto-suggested theory: Hick's Law (confidence: 90%)
Stage 1 (Notice) completed. (20% complete)
- Problem: Too many filters
- Theory: Hick's Law (auto-suggested)
- Evidence: Users forget the currently selected options
- Theory confidence: 90%
- Next: Use bid_interpret() for Stage 2
Stage 2 (Interpret) completed.
- Central question: Which products underperformed?
- Your data story has all key elements. Focus on making each component compelling and relevant.
- Your central question is appropriately scoped.
- No user personas defined
Stage 3 (Structure) completed.
- Layout: dual_process
- Concepts: Principle of Proximity
- No accessibility considerations specified
Stage 4 (Anticipate) completed.
- Bias mitigations: 2 defined
- Interaction principles: 2 defined
- Key suggestions: anchoring mitigation: Always show reference points like previous period, budget, or industry average., confirmation_bias mitigation: Consider how this bias affects user decisions., Consider also addressing these common biases: framing, confirmation
Stage 5 (Validate) completed.
- Summary panel: Key takeaways with export
- Collaboration: Enable team sharing and collaborative decision-...
- Next steps: 5 items defined
- Ensure summary panel includes actionable insights
Then explore suggestions:
bid_suggest_components(workflow, package = "bslib")
# A tibble: 4 × 7
  package component description bid_stage_relevance cognitive_concepts use_cases
  <chr>   <chr>     <chr>       <chr>               <chr>              <chr>
1 bslib   nav_panel Create tab… Stage 1,Stage 3     Cognitive Load Th… content …
2 bslib   accordion Implement … Stage 1,Stage 3     Progressive Discl… FAQ sect…
3 bslib   card      Organize c… Stage 3,Stage 5     Principle of Prox… Content …
4 bslib   layout_c… Create fle… Stage 3             Visual Hierarchy,… precise …
# ℹ 1 more variable: relevance <dbl>
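Because the result is a regular tibble, the usual dplyr verbs apply. For example, ranking suggestions by their relevance score and keeping just the descriptive columns (column names as printed above):

```r
library(dplyr)

bid_suggest_components(workflow, package = "bslib") |>
  arrange(desc(relevance)) |>      # most relevant suggestions first
  select(component, description, bid_stage_relevance)
```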
Learn More
- GitHub: jrwinget/bidux
- Vignettes: introduction-to-bid, concepts-reference, getting-started
- Theory paper coming soon! Watch this space
Final Thoughts
Better dashboards aren't just more attractive; they're more intelligible. They reduce unnecessary decisions, anticipate user confusion, and surface what matters.
The BID framework gives developers a behavioral lens. {bidux} gives them the tools to act on it.
You don't need a psychology degree to design with cognitive empathy, just the right questions and the right support. {bidux} helps you ask both.