# What's New in Voice Mode v4.1

Voice Mode v4.1 is a major release that introduces a voice-first interface, advanced speech processing, healthcare integrations, and comprehensive Arabic/Quranic language support.

## Release Highlights

- **Voice-First Input Bar**: Redesigned interface optimized for speech interaction
- **Streaming Text Display**: Real-time response rendering with smooth animations
- **Speaker Diarization**: Multi-speaker detection and attribution
- **FHIR R4 Streaming**: Healthcare data integration with retry resilience
- **Adaptive Audio Quality**: Dynamic bitrate adjustment based on network conditions
- **Quranic Lexicon**: 662 terms including all 114 Surah names and Tajweed terminology

---

## Getting Started

### Using the Voice-First Input Bar

1. **Tap the microphone button** to start recording
2. **Speak naturally** - the system detects when you stop speaking
3. **Wait for transcription** - your speech appears as text
4. **View the response** - streamed in real-time with Arabic support

### Selecting a VAD Preset

Choose the preset that matches your environment:

| Environment  | Recommended Preset |
| ------------ | ------------------ |
| Quiet room   | Sensitive          |
| Home/Office  | Balanced (default) |
| Public/Noisy | Relaxed            |

To change: **Settings (⚙️) → Voice Settings → VAD Preset**

### Understanding the Indicators

**Quality Badge (📶):**

- 🟢 **High**: Excellent connection (128kbps)
- 🟡 **Medium**: Acceptable quality (64kbps)
- 🟠 **Low**: Degraded quality (32kbps)
- 🔴 **Minimal**: Very poor connection (16kbps)

**PHI Indicator (🛡️):**

- 🟢 Green: No sensitive health info detected
- 🟡 Yellow: Potential PHI - review recommended
- 🔴 Red: PHI detected - secure handling active

### Adjusting Thinking Feedback

Control what happens while the AI processes your request:

- **Tone**: Soft chime when processing starts/ends
- **Haptic**: Gentle vibration feedback (mobile)
- **Visual**: Pulsing indicator animation

To configure: **Settings (⚙️) → Thinking Feedback**

---

## New Features

### Voice-First Input Bar

The input interface has been completely redesigned to prioritize voice interaction:

```
┌─────────────────────────────────────────────┐
│ [🎤 Tap to Speak]            │ [⌨️] │ [⚙️] │
│                                             │
│ "Ask me anything about the Quran..."        │
└─────────────────────────────────────────────┘
```

**Key Features:**

- Large, prominent microphone button for easy tap-to-speak
- Visual audio level indicator during recording
- Keyboard toggle for text input when needed
- Settings quick-access for VAD and voice preferences

**Usage:**

1. Tap the microphone button to start recording
2. Speak your question naturally
3. Release or tap again to send
4. Watch the real-time transcription appear

### Streaming Text Display

Responses now stream in real-time with smooth character-by-character rendering:

**Features:**

- Token-by-token streaming from the AI
- Smooth CSS animations for text appearance
- Markdown rendering with syntax highlighting
- Citation links with hover previews
- RTL support for Arabic text segments

**Technical Details:**

- Adaptive chunk sizing based on network latency
- Automatic scroll-to-bottom during streaming
- Graceful handling of connection interruptions

### Adaptive VAD Presets

Voice Activity Detection now includes three presets for different environments:

| Preset        | Sensitivity | Best For                                |
| ------------- | ----------- | --------------------------------------- |
| **Sensitive** | High        | Silent rooms, minimal background noise  |
| **Balanced**  | Medium      | Typical home/office environments        |
| **Relaxed**   | Low         | Public spaces, background conversations |

**Settings Panel:**

```
Voice Settings
├── VAD Preset: [Balanced ▼]
├── Auto-stop delay: 1.5s
└── Push-to-talk: [ ] Enable
```

### PHI Indicator & Routing

Healthcare-compliant PHI (Protected Health Information) handling:

**Visual Indicator:**

- Green shield icon: No PHI detected
- Yellow shield: Potential PHI, review recommended
- Red shield: PHI detected, secure handling active

**Routing Features:**

- Automatic detection of 18 HIPAA identifiers
- Secure channel routing for PHI content
- Audit logging for compliance
- User notification when PHI is detected
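The detect-then-route flow above can be sketched in a few lines. This is a minimal illustration, not the shipped detector: the regex patterns cover only a few of the 18 HIPAA identifier categories, and the keyword list and function names are assumptions.

```python
import re

# Illustrative patterns for a few HIPAA identifier categories (assumed,
# not the production rule set, which covers all 18 categories).
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
}

# Health-related keywords that warrant a "review recommended" state.
HEALTH_KEYWORDS = ("diagnosis", "medication", "lab result", "blood pressure")

def classify_phi(text: str) -> str:
    """Map text to an indicator state: red (PHI match),
    yellow (potential PHI), or green (nothing detected)."""
    if any(p.search(text) for p in PHI_PATTERNS.values()):
        return "red"
    if any(k in text.lower() for k in HEALTH_KEYWORDS):
        return "yellow"
    return "green"

def route(text: str) -> str:
    """Send anything yellow or red through the secure channel."""
    return "secure" if classify_phi(text) != "green" else "default"
```

Routing on anything non-green errs toward the secure channel, matching the "review recommended" semantics of the yellow indicator.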
### RTL Support for Arabic

Full right-to-left support for Arabic content:

**Features:**

- Automatic language detection
- Bidirectional text rendering
- RTL-aware UI layout
- Arabic numeral support
- Proper text alignment in mixed content

**Toggle:**

```
Display Settings
└── Text Direction: [Auto ▼] / LTR / RTL
```

### Unified Memory Context

Cross-session context management for personalized interactions:

**Capabilities:**

- Session history persistence
- User preference memory
- Conversation threading
- Context-aware follow-ups
- Learning style adaptation

---

## Phase 3 Features

### Speaker Diarization

Multi-speaker detection and attribution for group conversations:

**Capabilities:**

- Up to 4 concurrent speakers
- Real-time speaker change detection
- Speaker embedding extraction
- Cross-session speaker re-identification
- Confidence scoring per segment

**Use Cases:**

- Study circles with multiple participants
- Teacher-student Q&A sessions
- Family Quran recitation sessions

**Technical Specs:**

- Latency: <200ms for speaker change detection
- Accuracy: >90% speaker attribution
- Models: pyannote.audio segmentation

### FHIR R4 Streaming

Healthcare data integration with enterprise-grade resilience:

**Supported Resources:**

- Patient demographics
- Observations (vitals, lab results)
- Conditions (diagnoses)
- Medications
- Allergies

**Resilience Features:**

- Exponential backoff retry (1s → 2s → 4s → 8s)
- Circuit breaker pattern
- Partial response handling
- Connection pooling
- Health check monitoring

**Configuration:**

```yaml
fhir:
  base_url: https://fhir.example.com/r4
  timeout_ms: 5000
  max_retries: 3
  circuit_breaker_threshold: 5
```

### Adaptive Quality Controller

Dynamic audio quality adjustment based on network conditions:

**Quality Tiers:**

| Tier    | Bitrate | Sample Rate | Use Case          |
| ------- | ------- | ----------- | ----------------- |
| High    | 128kbps | 48kHz       | Excellent network |
| Medium  | 64kbps  | 24kHz       | Standard network  |
| Low     | 32kbps  | 16kHz       | Poor network      |
| Minimal | 16kbps  | 8kHz        | Very poor network |

**Hysteresis Behavior:**

- Upgrade: 10 consecutive good measurements
- Downgrade: 3 consecutive poor measurements
- Prevents quality oscillation

**Metrics Tracked:**

- RTT (Round Trip Time)
- Packet loss percentage
- Jitter
- Available bandwidth

---

## Lexicon Expansion

### Quranic Arabic Lexicon (328 terms)

Complete pronunciation coverage for Quranic content:

**Surah Names (114):**

All Surah names with accurate Modern Standard Arabic IPA:

- الفاتحة → /ʔalfaːtiħa/
- البقرة → /ʔalbaqara/
- آل عمران → /ʔaːl ʕimraːn/
- ... (all 114 Surahs)

**Tajweed Terms (50+):**

- تجويد → /tadʒwiːd/
- إدغام → /ʔidɣaːm/
- إخفاء → /ʔixfaːʔ/
- قلقلة → /qalqala/
- غنة → /ɣunna/
- مد → /madd/

**Islamic Vocabulary (200+):**

- Common phrases (Bismillah, Alhamdulillah)
- Names of Allah
- Prophet names
- Ritual terminology
- Theological concepts

### English Transliteration Lexicon (334 terms)

Transliterated Surah names and Islamic terms for English TTS:

**Multiple Spelling Variants:**

- Tajweed / tajwid
- Qur'an / Quran
- Insha'Allah / Inshallah

**IPA Approximations:**

Closest English phonemes for Arabic sounds:

- Al-Fatihah → /æl fɑːtiːhɑː/
- Bismillah → /bɪsmɪl lɑː/

---

## Configuration Reference

### Voice Settings

```javascript
// settings-panel.js configuration
voiceSettings: {
  vadPreset: 'balanced',   // 'sensitive' | 'balanced' | 'relaxed'
  autoStopDelay: 1500,     // ms
  pushToTalk: false,
  language: 'ar',          // 'ar' | 'en' | 'auto'
  rtlMode: 'auto'          // 'auto' | 'ltr' | 'rtl'
}
```

### Feature Flags

```python
# Feature flags for v4.1
VOICE_V4_INPUT_BAR = True
VOICE_V4_STREAMING_TEXT = True
VOICE_V4_SPEAKER_DIARIZATION = True
VOICE_V4_FHIR_STREAMING = True
VOICE_V4_ADAPTIVE_QUALITY = True
VOICE_V4_PHI_ROUTING = True
```

---

## Migration Guide

### From v4.0 to v4.1

**Breaking Changes:**

- None. v4.1 is fully backward compatible.
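Because nothing breaks, the v4.1 capabilities can be turned on one at a time behind the flags listed under Configuration Reference. A minimal gating sketch, where the lookup helper and the default-off behavior are assumptions rather than the shipped config loader:

```python
# Hypothetical flag store mirroring the documented v4.1 flag names.
FLAGS = {
    "VOICE_V4_INPUT_BAR": True,
    "VOICE_V4_SPEAKER_DIARIZATION": False,  # e.g. not yet rolled out
}

def is_enabled(flag: str) -> bool:
    """Unknown flags default to off, so a v4.0 config keeps working."""
    return FLAGS.get(flag, False)

# Only wire up subsystems whose flags are on.
active = [f for f in ("VOICE_V4_INPUT_BAR",
                      "VOICE_V4_SPEAKER_DIARIZATION",
                      "VOICE_V4_FHIR_STREAMING") if is_enabled(f)]
```

Defaulting unknown flags to off is what makes the incremental rollout safe for existing deployments.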
**Recommended Updates:**

1. Enable new feature flags in configuration
2. Update client to use streaming text display
3. Configure VAD presets for your environment
4. Test lexicon pronunciations for Quranic content

**New Dependencies:**

```
pyannote.audio>=3.0.0   # Speaker diarization
fhir.resources>=7.0.0   # FHIR R4 support
```

---

## Performance Metrics

### Latency Targets

| Operation                    | Target  | Measured |
| ---------------------------- | ------- | -------- |
| Voice input to transcription | <500ms  | 320ms    |
| Speaker change detection     | <200ms  | 180ms    |
| Text streaming first token   | <300ms  | 250ms    |
| FHIR resource fetch          | <1000ms | 650ms    |
| Quality tier switch          | <100ms  | 80ms     |

### Resource Usage

| Component           | Memory | CPU |
| ------------------- | ------ | --- |
| Speaker diarization | +150MB | +5% |
| FHIR client         | +20MB  | +2% |
| Adaptive quality    | +5MB   | +1% |
| Lexicon service     | +10MB  | <1% |

---

## Known Issues

1. **Speaker diarization accuracy**: May decrease with >4 simultaneous speakers
2. **FHIR timeout**: First request after idle may timeout (connection pool warming)
3. **RTL mixed content**: Complex bidirectional text may occasionally misalign
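Known issue 2 is also softened by the FHIR streaming service's retry schedule (exponential backoff, doubling from 1s as described under Resilience Features). A minimal sketch of that retry loop, where `fetch_resource` stands in for the real FHIR client call:

```python
import time

def fetch_with_backoff(fetch_resource, max_retries=3, base_delay=1.0):
    """Retry a transient failure with exponential backoff: 1s, 2s, 4s, ...

    `fetch_resource` is a placeholder for the actual FHIR request;
    the exception type and parameter names here are illustrative.
    """
    for attempt in range(max_retries + 1):
        try:
            return fetch_resource()
        except ConnectionError:
            if attempt == max_retries:
                raise  # retries exhausted; surface the error
            time.sleep(base_delay * (2 ** attempt))
```

In the shipped service this loop sits behind the circuit breaker, so a persistently failing endpoint stops being retried at all.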
---

## Acknowledgments

Voice Mode v4.1 was developed with contributions from:

- Platform team for core infrastructure
- Healthcare team for FHIR integration
- Localization team for Arabic/RTL support
- Community contributors for lexicon expansion

---

## UI Components Guide

> **Screenshots**: For visual reference, see the annotated screenshots in `/docs/voice/screenshots/`:
>
> - `voice-input-bar-states.png` - VoiceFirstInputBar in idle, recording, processing states
> - `streaming-text-rtl.png` - StreamingTextDisplay with RTL Arabic content
> - `quality-badge-tiers.png` - QualityBadge showing all 4 quality levels
> - `phi-indicator-states.png` - PHI indicator (green/yellow/red states)
> - `vad-presets-panel.png` - VAD preset selection in settings panel

### VoiceFirstInputBar Component

**Location:** `/var/www/quran/js/components/VoiceFirstInputBar.js`

The primary interface for voice interaction in v4.1:

```
┌──────────────────────────────────────────────────────────┐
│                                                          │
│  ┌────────────────────────────────────────────────────┐  │
│  │ "What would you like to learn about today?"        │  │
│  └────────────────────────────────────────────────────┘  │
│                                                          │
│            ╭─────────────────────────╮                   │
│            │ [ 🎤 ]  ← Tap to Speak  │                   │
│            │      Recording...       │                   │
│            ╰─────────────────────────╯                   │
│                                                          │
│  [ ⌨️ Text ]      [ ⚙️ Settings ]      [ ❓ Help ]      │
│                                                          │
└──────────────────────────────────────────────────────────┘
```

**States:**

| State      | Visual                 | Behavior                    |
| ---------- | ---------------------- | --------------------------- |
| Idle       | Grey microphone        | Tap to start recording      |
| Recording  | Pulsing red + waveform | Real-time audio levels      |
| Processing | Spinning indicator     | Transcription in progress   |
| Error      | Red outline + message  | Retry or switch to keyboard |

**Props:**

```javascript
<VoiceFirstInputBar
  onSubmit={(text) => handleSubmit(text)}
  vadPreset="balanced"
  language="ar"
  placeholder="Ask about any Surah..."
  showKeyboardToggle={true}
/>
```

### StreamingTextDisplay Component

**Location:** `/var/www/quran/js/components/StreamingTextDisplay.js`

Renders AI responses with real-time streaming:

```
┌──────────────────────────────────────────────────────────┐
│ 🤖 Assistant                                   12:34 PM  │
├──────────────────────────────────────────────────────────┤
│                                                          │
│ Surah Al-Fatihah (الفاتحة) is the opening chapter of     │
│ the Quran. It consists of seven verses and is recited    │
│ in every unit of prayer...                               │
│                                                          │
│ **Key Themes:**                                          │
│ • Praise of Allah (verses 1-4)                           │
│ • Request for guidance (verses 5-7)                      │
│ • The straight path (الصراط المستقيم)█                   │
│                                      ↑cursor             │
│ ──────────────────────────────────────────────────────── │
│ 📖 Source: Tafsir Ibn Kathir, Vol 1, p.23     [View →]   │
│                                                          │
└──────────────────────────────────────────────────────────┘
```

**Features:**

- Token-by-token rendering with cursor animation
- Markdown support (bold, lists, code blocks)
- RTL text detection and rendering
- Citation card expansion on click
- Copy/share buttons on completion

**Props:**

```javascript
<StreamingTextDisplay
  onComplete={() => setIsStreaming(false)}
  showCitations={true}
  enableRTL="auto"
  animationSpeed={30} // ms per character
/>
```

### QualityBadge Component

**Location:** `/var/www/quran/js/components/QualityBadge.js`

Displays current audio quality and network status:

```
Normal view:        Expanded on hover/tap:
┌─────────┐         ┌─────────────────────────┐
│ 📶 High │         │ 📶 High Quality         │
└─────────┘         │ ─────────────────────── │
                    │ Bitrate:     128 kbps   │
                    │ RTT:         45ms       │
                    │ Packet Loss: 0.1%       │
                    └─────────────────────────┘
```

**Quality Indicators:**

| Badge   | Color  | Meaning              |
| ------- | ------ | -------------------- |
| 📶 High | Green  | Excellent connection |
| 📶 Med  | Yellow | Acceptable quality   |
| 📶 Low  | Orange | Degraded quality     |
| 📶 Min  | Red    | Minimal quality mode |

**Props:**

```javascript
<QualityBadge onTierChange={(tier) => logQualityEvent(tier)} />
```
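The tier shown in this badge is driven by the Adaptive Quality Controller's hysteresis rule (upgrade after 10 consecutive good measurements, downgrade after 3 consecutive poor ones). A minimal sketch of that counter logic, using the tier names from the quality table; the class shape and the boolean "good measurement" input are assumptions, not the shipped controller:

```python
# Tier ladder from the Adaptive Quality Controller table, worst to best.
TIERS = ["minimal", "low", "medium", "high"]

class HysteresisController:
    """Switch tiers only after a sustained streak, preventing oscillation."""
    UP_STREAK = 10   # consecutive good measurements to upgrade
    DOWN_STREAK = 3  # consecutive poor measurements to downgrade

    def __init__(self, tier="high"):
        self.tier = tier
        self.good = 0
        self.poor = 0

    def observe(self, measurement_good: bool) -> str:
        """Feed one measurement; return the (possibly updated) tier."""
        if measurement_good:
            self.good += 1
            self.poor = 0  # a good sample breaks any poor streak
            if self.good >= self.UP_STREAK and self.tier != "high":
                self.tier = TIERS[TIERS.index(self.tier) + 1]
                self.good = 0
        else:
            self.poor += 1
            self.good = 0
            if self.poor >= self.DOWN_STREAK and self.tier != "minimal":
                self.tier = TIERS[TIERS.index(self.tier) - 1]
                self.poor = 0
        return self.tier
```

The asymmetric streaks (3 down vs. 10 up) make the controller quick to protect a degrading call but slow to trust a recovering network.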
### Component Integration Example

```javascript
// Main voice interface integration
import { useState } from "react";

function VoiceInterface() {
  const [streaming, setStreaming] = useState(false);
  const [quality, setQuality] = useState("high");

  return (
    <div>
      <QualityBadge onTierChange={(tier) => setQuality(tier)} />
      <StreamingTextDisplay
        onComplete={() => setStreaming(false)}
        showCitations={true}
      />
      <VoiceFirstInputBar onSubmit={() => setStreaming(true)} />
    </div>
  );
}
```

---

## Related Documentation

- [Voice Mode Architecture](./voice-mode-v4-overview.md)
- [Speaker Diarization Service](./speaker-diarization-service.md)
- [FHIR Streaming Service](./fhir-streaming-service.md)
- [Adaptive Quality Service](./adaptive-quality-service.md)
- [Lexicon Service Guide](./lexicon-service-guide.md)

---

**Release Date:** December 2024
**Version:** 4.1.0
**Status:** Production Ready