UXit Documentation

Getting Started

Learn the basics of UXit and how to get started with heuristic evaluations.

Important

UXit is in limited closed alpha testing and under active development. The contents of this documentation are subject to change as updates are made.

Overview

UXit is for UX practitioners and teams who want a structured way to evaluate designs, track quality over time, and produce clear, evidence-based outputs for development. It connects guidelines, requirements, flows, evaluations, and analytics into a single workflow so testing, decision-making, and handoff are consistent, traceable, and fluid. As the platform evolves, additional capabilities will be added to support designers, researchers, developers, and project managers.

Platform Affordances

Systems

The following pages, accessed via the sidebar, are available for project planning and data analysis:

  • Guidelines: Define how you evaluate
  • Requirements: Define what must be true
  • Flows: Define what you evaluate
  • Evaluations: Run against Guidelines
  • Analytics: Track progress over time

Functions

The following functions support rapid user interaction and input:

  • Fuzzy Search: Accessed via the search button in the sidebar.
  • User Settings: Accessed from the avatar menu in the sidebar.
  • Database Export: Accessed from the avatar menu in the sidebar.
  • Help Documentation: Accessed via the help button in the sidebar.

Your First Evaluation

Option 1: Full Requirements Structure

For new projects or when you need traceability through all design phases:

  1. Create Guidelines (mandatory)
  2. Create Requirements (project definition)
  3. Create Use Cases (break into scenarios)
  4. Add Acceptance Criteria (define testable behaviors)
  5. Import Use Case into Flows
  6. Run Evaluations
  7. Track Analytics over versions

Option 2: Direct to Flows

For teams with existing defined processes (like a checkout flow you want to evaluate):

  1. Create Guidelines (mandatory)
  2. Create a Flow (label + swimlane definition)
  3. Run Evaluations
  4. Track Analytics

Important

Running only one evaluation provides limited insight. The real value comes from comparing multiple evaluation runs (v1, v2, v3) to track improvements over time.

Step-by-Step

Step 1: Create Guidelines (Required)

Guidelines are the foundation for all evaluations. They define your quality standards and testing criteria.

What to do:

  • Create a new Guideline Set with categories (e.g., Usability, Accessibility, Performance)
  • Add binary, non-opinionated criteria under each category (e.g., "All form errors are clearly visible.")
  • Save this guideline set

Tip

Every evaluation must test against a guideline set. Teams typically establish core guidelines once and reuse them across all evaluations.
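To make the structure concrete, a guideline set can be pictured as categories containing binary criteria. The sketch below is purely illustrative — UXit does not expose or require this format, and the names and criteria are hypothetical examples.

```python
# Hypothetical sketch of a guideline set: categories mapping to binary,
# non-opinionated criteria. Not UXit's actual data format.
guideline_set = {
    "name": "Core UX Guidelines",
    "categories": {
        "Usability": [
            "All form errors are clearly visible.",
            "Primary actions are reachable within one click.",
        ],
        "Accessibility": [
            "All interactive elements are keyboard-operable.",
        ],
    },
}

# Each criterion is binary: during an evaluation it is answered
# Pass, Fail, or N/A.
total_criteria = sum(len(c) for c in guideline_set["categories"].values())
print(total_criteria)  # 3
```

Because criteria are phrased as binary statements rather than opinions, two evaluators reviewing the same flow should reach the same answers.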

Step 2: Choose Your Path

Option A: For Full Requirements Tracking

  • Create Requirements: Define your project scope and high-level objectives.
  • Create Use Cases: Break requirements into focused user scenarios (e.g., "Checkout process").
  • Add Acceptance Criteria: For each use case, define specific testable behaviors (e.g., "On submit, a confirmation modal displays.").

Option B: For Direct Evaluation

  • Create a Flow: If you already have a defined process, give it a label and define its swimlane/scope. No need to map through Requirements.

Step 3: Import or Create a Flow

A Flow is where evaluations happen. If you followed the Requirements path, import your Use Case as a Flow. If you created a standalone scenario, it's ready to evaluate.

Important

One flow = one focused user scenario that you'll evaluate multiple times to track progress.

Step 4: Run Your First Evaluation

Create an Evaluation against your Flow. Answer Pass/Fail/N/A for each question in your guideline set based on the flow. This is v1 of your evaluation.
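One way to think about a run's score is as a pass rate over the answered criteria. This is an assumption for illustration only — UXit's actual scoring method is not documented here; the sketch below simply excludes N/A answers from the denominator, and the answers shown are hypothetical.

```python
# Illustrative only: a simple pass-rate score for one evaluation run (v1),
# assuming N/A answers are excluded. Not UXit's documented scoring formula.
answers_v1 = {
    "All form errors are clearly visible.": "Fail",
    "Primary actions are reachable within one click.": "Pass",
    "All interactive elements are keyboard-operable.": "N/A",
}

def pass_rate(answers):
    scored = [a for a in answers.values() if a != "N/A"]
    if not scored:
        return 0.0  # nothing applicable was tested
    return sum(a == "Pass" for a in scored) / len(scored)

print(f"{pass_rate(answers_v1):.0%}")  # 50%
```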

Step 5: Run More Evaluations

Make design changes to your flow, then run v2 of the evaluation against the same guideline set.

Compare v1 → v2 → v3 to see progress over time. This is where the power comes in.

Step 6: View Analytics

After multiple evaluations, your Analytics dashboard shows:

  • How your score improved (or declined)
  • Which categories are strong/weak
  • Trends over time
  • Specific questions causing issues
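The dashboard's comparisons can be sketched as per-category scores tracked across versions. The scores and the trend logic below are hypothetical illustrations, not UXit's API or actual calculations.

```python
# Illustrative sketch (not UXit's API): spotting per-category trends
# across evaluation versions, as the Analytics dashboard surfaces them.
# All scores are made-up example data.
scores = {  # category -> pass rate per version
    "Usability":     {"v1": 0.50, "v2": 0.75, "v3": 0.90},
    "Accessibility": {"v1": 0.40, "v2": 0.40, "v3": 0.60},
}

for category, by_version in scores.items():
    first, latest = by_version["v1"], by_version["v3"]
    trend = ("improved" if latest > first
             else "declined" if latest < first
             else "flat")
    print(f"{category}: {first:.0%} -> {latest:.0%} ({trend})")
```

A flat or declining category across versions points to the specific criteria worth revisiting before the next design iteration.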

Warning

Do not store classified, regulated, sensitive PII, or any data subject to HIPAA/SOC2 compliance requirements.

Best Practices for Success

  • Establish Guidelines upfront - Don't improvise during evaluations. Align your team on criteria first.
  • Keep flows focused - One flow = one scenario. Don't try to evaluate an entire product in one flow.
  • Run evaluations consistently - Use the same guideline set across versions to measure real progress.
  • Iterate and re-evaluate - The value comes from v1 → v2 → v3. One-off evaluations don't show trends.
  • Share results with development - Use Analytics outputs to justify design decisions in handoff.
