LLM-Driven Accessible Interface: A Model-Based Approach
TL;DR Highlight
An architectural proposal for automatically generating WCAG-compliant accessible UIs by combining a structured UserProfile, declarative adaptation rules, and an LLM.
Who Should Read
Frontend/full-stack developers building accessible UIs for users with disabilities in healthcare and public services. Technical architects designing content personalization pipelines with LLMs.
Core Mechanics
- A 5-layer architecture that separates the UserProfile (accessibility attributes), declarative AdaptationRules, and the LLM into distinct layers to automatically generate WCAG 2.2 and EN 301 549 compliant UIs
- The LLM reads cognitive and hearing impairment profiles, then automatically selects and applies Plain-Language text, pictograms, and high-contrast layouts
- 7 Derived Accessibility Requirements (DARs) create a 1:1 traceable chain from 'user need → adaptation rule → normative standard'
- A Quality Gate that automatically checks readability, semantic fidelity, and factual consistency after LLM output, triggering regeneration or Human-on-the-Loop (HoTL) review on failure
- The entire transformation process is recorded as auditable logs using SysML v2 (Systems Modeling Language) to generate regulatory compliance evidence
- A step-by-step medical-instruction UI mockup integrated with a React-based renderer, implemented and demonstrated end-to-end in a post-medical-consultation scenario
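The profile → rule → renderer separation described above can be sketched as a small rule-selection step. This is a minimal illustration, not the paper's actual API: the class and function names (UserProfile, AdaptationRule, select_rules) and the rule-to-standard pairings are assumptions based on the DARs named later in this summary.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class UserProfile:
    # Accessibility attributes, mirroring the profile flags used in the prompt
    cognitive_support: bool = False
    auditory_exclusion: bool = False

@dataclass
class AdaptationRule:
    dar_id: str                              # e.g. "DAR-01"
    transform: str                           # e.g. "simplifyText()"
    standard: str                            # normative source (indicative)
    applies: Callable[[UserProfile], bool]   # activation predicate

# Illustrative subset of rules; standard assignments are assumptions
RULES = [
    AdaptationRule("DAR-01", "simplifyText()", "ISO 24495-1",
                   lambda p: p.cognitive_support),
    AdaptationRule("DAR-03", "attachPictograms()", "W3C COGA",
                   lambda p: p.cognitive_support),
    AdaptationRule("DAR-04", "disableAudio()", "EN 301 549",
                   lambda p: p.auditory_exclusion),
]

def select_rules(profile: UserProfile) -> list[AdaptationRule]:
    """Return the declarative rules activated by this profile."""
    return [r for r in RULES if r.applies(profile)]

profile = UserProfile(cognitive_support=True, auditory_exclusion=True)
print([r.dar_id for r in select_rules(profile)])  # → ['DAR-01', 'DAR-03', 'DAR-04']
```

Keeping the predicates declarative (data, not branching code) is what makes the later traceability and audit-logging step possible: each fired rule carries its own DAR id and normative reference.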
Evidence
- No quantitative user experiments — this is a proof-of-concept paper at prototype level; user evaluation is noted as future work
- The structure that simultaneously satisfies four international standards — WCAG 2.2, EN 301 549, ISO 24495-1, and W3C COGA — is formalized using SysML v2 Requirement Diagrams
- A complete mapping table of condition → transformation function (e.g., simplifyText(), attachPictograms()) → normative standard is provided for each of the 7 DARs
- An end-to-end flow is implemented where the pipeline automatically generates a UI suited to a combined cognitive and hearing impairment profile in a post-medical consultation scenario
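Five of the seven transformation functions are named in this summary's code example (DAR-01 through DAR-05); a condition → transform → standard lookup for those five might look like the sketch below. The per-DAR standard assignments are indicative guesses, not quoted from the paper's mapping table, and the remaining two DARs are not enumerated here.

```python
# Illustrative traceability mapping for the five DARs named in this summary.
# Standard assignments are assumptions, not the paper's actual table.
DAR_MAPPING = {
    "DAR-01": {"transform": "simplifyText()",      "standard": "ISO 24495-1"},
    "DAR-02": {"transform": "structureAsSteps()",  "standard": "W3C COGA"},
    "DAR-03": {"transform": "attachPictograms()",  "standard": "W3C COGA"},
    "DAR-04": {"transform": "disableAudio()",      "standard": "EN 301 549"},
    "DAR-05": {"transform": "applyHighContrast()", "standard": "WCAG 2.2 (SC 1.4.3)"},
}

def trace(dar_id: str) -> str:
    """Render one link of the need → rule → standard chain as an audit string."""
    entry = DAR_MAPPING[dar_id]
    return f"{dar_id} -> {entry['transform']} -> {entry['standard']}"

print(trace("DAR-01"))  # → DAR-01 -> simplifyText() -> ISO 24495-1
```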
How to Apply
- Inject UserProfile attributes (e.g., cognitiveSupport: true, auditoryExclusion: true) into LLM prompts and construct the prompt dynamically from the activated AdaptationRules; this adds an accessibility transformation layer to an existing content management system
- Immediately after LLM output, add a Quality Gate that checks readability metrics like Flesch Reading Ease plus semantic similarity to the source text (e.g., cosine similarity), and implement a fallback loop that triggers regeneration when thresholds are not met
- When converting medical, legal, or public-service text to Plain Language, maintain the prompt templates that request 'pictogram descriptions' and 'numbered step structure' as separately managed artifacts; this keeps the transformations both reusable and auditable
Code Example
# Based on Section 4.4 of the paper — GenAIEngine prompt template example
prompt_template = """
[Instruction]: Simplify this medical note using Plain-Language guidelines (ISO 24495-1).
Add pictogram descriptions for key medical actions.
Structure the output as numbered steps.
[UserProfile]:
cognitiveSupport: {cognitive_support}
auditoryExclusion: {auditory_exclusion}
contrastMode: high
[AdaptationRules]:
- simplifyText() # DAR-01: Plain Language
- structureAsSteps() # DAR-02: Step-wise
- attachPictograms() # DAR-03: Pictograms
- disableAudio() # DAR-04: Visual-only
- applyHighContrast() # DAR-05: Contrast
[InputText]: {medical_note}
[OutputFormat]:
plain_text: "..."
pictogram_descriptions: ["...", "..."]
steps: ["Step 1: ...", "Step 2: ..."]
"""
# Usage example
formatted_prompt = prompt_template.format(
    cognitive_support=True,
    auditory_exclusion=True,
    medical_note="Take Ibuprofen 400mg every 8 hours unless you experience gastric discomfort."
)
# Quality Gate — output validation
def quality_gate(output: dict, original: str) -> bool:
    from textstat import flesch_reading_ease
    from sentence_transformers import SentenceTransformer, util
    # Readability check (Plain Language target: Flesch Reading Ease 60+)
    readability = flesch_reading_ease(output["plain_text"])
    if readability < 60:
        return False  # trigger regeneration
    # Semantic fidelity check (cosine similarity >= 0.8)
    model = SentenceTransformer("all-MiniLM-L6-v2")
    similarity = util.cos_sim(
        model.encode(original),
        model.encode(output["plain_text"])
    ).item()
    return similarity >= 0.8
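The fallback loop mentioned under How to Apply can be wrapped around quality_gate as below. This is a minimal sketch under stated assumptions: `generate` stands in for the actual LLM call, and the `max_retries` policy and HoTL escalation label are illustrative, not the paper's implementation.

```python
def generate_with_fallback(prompt: str, original: str,
                           generate, quality_gate,
                           max_retries: int = 3):
    """Regenerate until the Quality Gate passes, else escalate to HoTL review."""
    output = None
    for _ in range(max_retries):
        output = generate(prompt)          # LLM call (assumed interface)
        if quality_gate(output, original):
            return output, "accepted"
    # Repeated failures: hand off to Human-on-the-Loop review, never ship raw
    return output, "hotl_review"
```

Returning a status alongside the output (rather than raising) lets the caller log every attempt, which fits the paper's emphasis on auditable transformation records.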
Original Abstract
The integration of Large Language Models (LLMs) into interactive systems opens new opportunities for adaptive user experiences, yet it also raises challenges regarding accessibility, explainability, and normative compliance. This paper presents an implemented model-driven architecture for generating personalised, multimodal, and accessibility-aligned user interfaces. The approach combines structured user profiles, declarative adaptation rules, and validated prompt templates to refine baseline accessible UI templates that conform to WCAG 2.2 and EN 301 549, tailored to cognitive and sensory support needs. LLMs dynamically transform language complexity, modality, and visual structure, producing outputs such as Plain-Language text, pictograms, and high-contrast layouts aligned with ISO 24495-1 and W3C COGA guidance. A healthcare use case demonstrates how the system generates accessible post-consultation medication instructions tailored to a user profile comprising cognitive disability and hearing impairment. SysML v2 models provide explicit traceability between user needs, adaptation rules, and normative requirements, ensuring explainable and auditable transformations. Grounded in Human-Centered AI (HCAI), the framework incorporates co-design processes and structured feedback mechanisms to guide iterative refinement and support trustworthy generative behaviour.