Generate synthetic dialogues programmatically for any topic

Learn Any Business Domain Through Conversations
llm, design, ideation
Published

June 1, 2026

Recently, I had a niche startup idea: helping elderly people navigate doctor appointments in the public health system in Poland. As a startupper, I should have found some people interested in such a service and interviewed them to learn more about their needs. But to start, I decided to use LLMs to generate some synthetic conversations first and learn a bit more about the customer and their needs that way. This post walks through how I built this dialogue generation. It is all code driven, so you'll need appropriate API keys to replicate it for your use case (TODO: provide links to registration and key generation).

Think of it as synthetic field research: you get insights from a batch of conversations without actually running them. This is cheating, of course, and the quality will vary by domain (if the model already has some knowledge of the area you want to explore, the output can be genuinely useful). And because it's LLM-generated, you can explore scenarios that are rare, uncomfortable to ask about, or haven't happened yet.

%pip install pydantic instructor openai dotenv
from dotenv import load_dotenv
load_dotenv()
True
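load_dotenv() pulls the API keys from a local .env file sitting next to the notebook. A minimal sketch of what that file could contain (the OPENAI_API_KEY name is my assumption of what the OpenAI client in the helper reads; ELEVENLABS_API_KEY is the name used later in this post):

# .env -- keep this file out of version control
OPENAI_API_KEY=...        # assumed: read by the OpenAI client used in the helper
ELEVENLABS_API_KEY=...    # read later via os.getenv("ELEVENLABS_API_KEY")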
from pydantic import BaseModel, Field
from enum import Enum
from typing import List, Optional
import re
import helper as h
NR_OF_TURNS=15
class DialogueSpeaker(str, Enum):
    CUSTOMER_SERVICE = 'CUSTOMER_SERVICE' # customer service assistant
    CLIENT = 'CLIENT' # either the person responsible for the elderly or disabled patient, or the payer

class DialogueTurn(BaseModel):
    speaker: DialogueSpeaker = Field(..., description="Person speaking")
    content: str = Field(..., description="Exact text of what the person said")

class DialogueScenario(BaseModel):
    title: str = Field(..., description="Scenario for the conversation")
    dialogue: List[DialogueTurn] = Field(..., description="Conversation")
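For orientation, here is a tiny hand-written instance of the schema (not part of the pipeline itself), just to show the shape the model is asked to return:

# quick illustration of the structure the model must produce
example = DialogueScenario(
    title="Example scenario",
    dialogue=[
        DialogueTurn(speaker=DialogueSpeaker.CLIENT, content="Hi, I need help booking a visit for my mother."),
        DialogueTurn(speaker=DialogueSpeaker.CUSTOMER_SERVICE, content="Of course, can you tell me more about her situation?"),
    ],
)
print(example.model_dump_json(indent=2))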
SCENARIO_SYSTEM_PROMPT = f"""
# Customer Service Conversation Generator

You are generating a fictional but realistic in-person customer discovery interview in the spirit of the lean startup methodology.

The problem the startupper found is a lack of support for elderly and disabled patients within the Polish public health system.
There are often long queues and wait times; finding the right person to register with and the doctor's room for the visit may be
challenging in large clinics; or the patient simply has difficulty remembering and understanding because of dementia, and relatives
don't have time to assist them during the scheduled visit.

Ask open questions and validate assumptions.

Output Requirements: 

 - Return ONLY valid JSON.  
 - No explanations, no commentary, no markdown, no prose outside the JSON.
 - The JSON must strictly conform to the DialogueScenario schema.
 - All fields must be present and correctly typed.
 - The dialogue array must contain at least {NR_OF_TURNS} turns representing a natural conversation.
"""
scenarios_user_messages = {
    "Elderly Patient Unable to Navigate NFZ Clinic Alone": {
        "prompt": "Generate scenario for a customer discovery interview with an adult child describing how their elderly parent gets lost inside large NFZ clinics and cannot find the correct registration desk or doctor's room without assistance."
    },
    "Relative Overwhelmed by NFZ Queue Logistics": {
        "prompt": "Generate scenario for a customer discovery interview with a caretaker who explains the difficulty of managing long NFZ queues with an elderly patient who cannot stand for long periods or understand instructions and get's angry easily."
    },
    "Dementia Patient Missing Appointments": {
        "prompt": "Generate scenario for a customer discovery interview with a family member whose relative with dementia frequently forgets appointment times, required documents, or where to go inside the NFZ clinic."
    },
    "Working Professional Unable to Accompany Parent": {
        "prompt": "Generate scenario for a customer discovery interview with a busy professional who cannot take time off work to accompany their elderly parent to NFZ appointments, leading to missed or chaotic visits."
    },
}
def generate_dialogue_for_scenario(user_prompt, model="gpt-4o-mini"):
    messages = [
        {'role': 'system', 'content': SCENARIO_SYSTEM_PROMPT},
        {'role': 'user', 'content': user_prompt}
    ]
    
    try:
        return h.call_api(messages, DialogueScenario, model=model)
    except Exception as e:
        # surface the failing input for debugging, then re-raise so the caller notices
        print(f"Failed for messages: {messages}\nError: {e}")
        raise
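The helper module itself isn't shown in this post. Since the environment installs instructor and openai, a minimal sketch of what h.call_api could look like (this is my assumption about the helper, not its actual code):

# helper.py -- hypothetical sketch of the call_api wrapper
import instructor
from openai import OpenAI

_client = instructor.from_openai(OpenAI())  # picks up OPENAI_API_KEY from the environment

def call_api(messages, response_model, model="gpt-4o-mini"):
    # instructor validates the completion against the pydantic model and retries on schema errors
    return _client.chat.completions.create(
        model=model,
        messages=messages,
        response_model=response_model,
    )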
# The generation below was run once and the results cached to disk, so re-running
# the notebook doesn't burn API credits; uncomment to regenerate.
# for scenario_name, scenario_dict in scenarios_user_messages.items():
#     scenarios_user_messages[scenario_name]['results'] = generate_dialogue_for_scenario(scenario_dict['prompt'])
#     with open(f"data/scenarios_user_messages_{scenario_name}_results.json", "w") as f:
#         f.write(scenarios_user_messages[scenario_name]['results'].model_dump_json())

for scenario_name, scenario_dict in scenarios_user_messages.items():
    with open(f"data/scenarios_user_messages_{scenario_name}_results.json") as f:
        scenarios_user_messages[scenario_name]['results'] = DialogueScenario.model_validate_json(f.read())
for scenario_name, scenario_dict in scenarios_user_messages.items():
    print(f"# {scenario_name} {scenario_dict['results'].title}")
    for turn in scenario_dict['results'].dialogue:
        print(f"\t{turn.speaker.value.upper()}: {turn.content}")
# Elderly Patient Unable to Navigate NFZ Clinic Alone Interview with an Adult Child of an Elderly Patient
    CLIENT: Hi, I'm here to talk about my experience with my elderly parent visiting the local NFZ clinic.
    CUSTOMER_SERVICE: Of course! Can you tell me about what challenges your parent faces when visiting the clinic?
    CLIENT: Well, the first major issue is that the clinics are huge, and my parent often gets lost trying to find the registration desk.
    CUSTOMER_SERVICE: That sounds frustrating. How does your parent usually navigate through the clinic?
    CLIENT: They usually try to remember where things are, but with their memory issues, it becomes really difficult for them.
    CUSTOMER_SERVICE: I can imagine. Have they ever asked someone for help when they feel lost?
    CLIENT: Yes, sometimes they do, but it can be hard to find someone to ask, or they don’t want to bother others.
    CUSTOMER_SERVICE: What about signs or maps in the clinic? Do they find those helpful?
    CLIENT: Not really. The signs can be confusing and my parent often has a hard time understanding them.
    CUSTOMER_SERVICE: I see. And what happens when they finally get to the registration desk?
    CLIENT: The staff there are usually quite busy, and my parent might forget their appointment details or feel rushed.
    CUSTOMER_SERVICE: That sounds stressful. Do you ever accompany them to the appointments?
    CLIENT: I try to, but with my work schedule, it's hard to be there every time.
    CUSTOMER_SERVICE: What do you think would help make the process easier for your parent?
    CLIENT: Having someone available to guide them through the clinic would be ideal, or even a helper service for elderly patients.
    CUSTOMER_SERVICE: That sounds like a great idea. Is there anything else you think could improve their experience?
    CLIENT: More staff dedicated to helping patients navigate the clinic would make a huge difference.
# Relative Overwhelmed by NFZ Queue Logistics Customer Discovery Interview with a Caretaker of an Elderly Patient
    CLIENT: Hello, I'm here to talk about the experiences I've had taking care of my elderly father.
    CUSTOMER_SERVICE: Thank you for coming in today. Can you tell me more about the challenges you face when managing visits to the healthcare system?
    CLIENT: One of the biggest issues is the long queues at the NFZ clinics. My father often can't stand for long periods.
    CUSTOMER_SERVICE: I see, that must be very difficult for both of you. How does he react during these long waits?
    CLIENT: He gets frustrated easily, especially when he doesn't understand what’s happening or what he needs to do next.
    CUSTOMER_SERVICE: That sounds challenging. How do you manage those situations when he becomes angry?
    CLIENT: I try to remain calm and reassure him, but it can be tough. Sometimes he just doesn't want to cooperate anymore.
    CUSTOMER_SERVICE: Have you found any strategies or support that help during these visits?
    CLIENT: Not really. I often feel overwhelmed and wish there were more resources available to assist elderly patients like him.
    CUSTOMER_SERVICE: Is it difficult for you to navigate the system as well, in finding the right department or person?
    CLIENT: Yes, very much so. Sometimes we go to the wrong room and then have to start all over again.
    CUSTOMER_SERVICE: That sounds frustrating. How does this impact his healthcare access?
    CLIENT: It makes it harder for him to get the help he needs regularly. Sometimes we just give up altogether.
    CUSTOMER_SERVICE: What improvements do you think could be made to help you and others in similar situations?
    CLIENT: Having someone to assist with navigation would be helpful, maybe a dedicated caretaker for elderly patients.
    CUSTOMER_SERVICE: That’s a great suggestion. Would you be open to exploring solutions that could provide that support?
    CLIENT: Absolutely. Anything that makes it easier for us would be welcome.
# Dementia Patient Missing Appointments Customer Discovery Interview with a Family Member of a Patient with Dementia
    CLIENT: Hi, I'm really struggling to manage my relative's medical appointments. They often forget when and where to go.
    CUSTOMER_SERVICE: That sounds very challenging. Could you tell me more about the typical issues you face with their appointments?
    CLIENT: Sure. Sometimes they forget the time of the appointment completely, and often they don't know what documents they need to bring.
    CUSTOMER_SERVICE: I see. How do you currently remind them of their appointments?
    CLIENT: I try to call them or send them messages, but they often can't remember the information or get confused.
    CUSTOMER_SERVICE: That must be frustrating. Do they recognize the clinic when you arrive?
    CLIENT: Not always. Sometimes they forget the way or the name of the clinic, so we spend a lot of time just trying to find the right place.
    CUSTOMER_SERVICE: How do you handle it when they can't remember where to go?
    CLIENT: I usually have to go with them, but I have a busy schedule, and my siblings are also busy.
    CUSTOMER_SERVICE: Have you ever considered using any kind of reminders or technology to assist with this?
    CLIENT: I've thought about it, but I’m not sure which tools would work best for someone with dementia.
    CUSTOMER_SERVICE: What kind of support do you think would help you the most in this situation?
    CLIENT: Maybe having someone who could assist us at the clinic would be helpful. Just someone who knows what to do.
    CUSTOMER_SERVICE: That sounds like a great idea. Would you prefer this assistance to be available on-site at the clinic or remotely through a phone app or service?
    CLIENT: On-site would be best since they need physical help navigating.
    CUSTOMER_SERVICE: Thank you for sharing this with me. Is there anything else that you think could improve the experience for both you and your relative?
    CLIENT: Just having clearer communication from the clinic about what they need and when would really help as well.
# Working Professional Unable to Accompany Parent Customer Discovery Interview with a Busy Professional
    CLIENT: Hi, thank you for meeting with me today. I really appreciate your time.
    CUSTOMER_SERVICE: Of course! I'm glad to help. Can you tell me a bit about your situation with your elderly parent?
    CLIENT: Well, my father is quite frail and needs regular check-ups at the clinics, but my work schedule is so demanding.
    CUSTOMER_SERVICE: That sounds challenging. How often does he have appointments?
    CLIENT: Usually once a month, but sometimes it's more frequent if he feels unwell.
    CUSTOMER_SERVICE: Have you had any issues with getting him to those appointments?
    CLIENT: Yes, there are times when I can’t take time off work, and he ends up either missing the appointment or not having someone to support him.
    CUSTOMER_SERVICE: I see. How does he manage the visit alone when you can't accompany him?
    CLIENT: He gets confused sometimes, especially in large clinics where it's hard to find the right office.
    CUSTOMER_SERVICE: That must be stressful for both of you. Have you looked into any ways he could receive support?
    CLIENT: Not really. I don’t know where to start, and my relatives are also busy with their jobs.
    CUSTOMER_SERVICE: If there were a service that could help him navigate those appointments, would you be interested?
    CLIENT: Definitely! It would need to be reliable though, as I can't afford to have him get lost or miss a visit.
    CUSTOMER_SERVICE: Are there specific features you think would be necessary for such a service?
    CLIENT: Yes, maybe someone to assist him on-site, and a way for me to receive updates or manage appointments remotely.
    CUSTOMER_SERVICE: That sounds practical. How do you think your father would feel about having someone assist him?
    CLIENT: I think he would feel much more comfortable and less anxious knowing someone is there to help.

TTS

You could stop there, but it would be nice to actually hear these conversations. For that, each dialogue first needs to be augmented with TTS tags.
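As a purely illustrative example of what that augmentation can look like: a plain turn such as "I see, that must be very difficult for both of you." might come back as "[sighs] <break time="300ms" /> I see... that must be very difficult for both of you." (which tags are actually honoured depends on the ElevenLabs model you pick). The setup and the enhancement prompt: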

from elevenlabs import ElevenLabs
from dotenv import load_dotenv
from IPython.display import Audio
import os
client = ElevenLabs(
    api_key = os.getenv("ELEVENLABS_API_KEY")
)
VOICE_ENHANCEMENT_FOR_11LABS_SYSTEM_PROMPT = """
Your task:

# Preserve the original meaning and intent of each utterance.

## Add ElevenLabs tags, such as:

- <voice emotion="..."> (e.g., empathetic, calm, encouraging, reflective, tense, relieved)
- <break time="...ms">
- <prosody rate="..." pitch="...">
- <emphasis>

## Enhance the dramatic flow of the conversation by:

- highlighting the client’s emotions,
- adding subtle pauses,
- marking moments of tension, relief, or reflection.

## Do not change the meaning, but you may:

- add natural pauses,
- add effects using tags such as
[laughs], [laughs harder], [starts laughing], [wheezing], [whispers], [sighs], [exhales], [sarcastic], [curious], [excited], [crying], [snorts], [mischievously],
- emphasize emotional transitions.

## Each CUSTOMER_SERVICE and CLIENT utterance must have its own block with tags.

## Do not add narration or commentary — only the dialogue.
"""

VOICE_ENHANCEMENT_FOR_11LABS_USER_PROMPT = "Below is the dialogue to enhance:\n"
def enhance_dialog_for_elevenlabs(dialog):
    user_prompt = VOICE_ENHANCEMENT_FOR_11LABS_USER_PROMPT
    for turn in dialog:
        user_prompt += f"\t{turn.speaker.value.upper()}: {turn.content}\n"
    messages = [
        {'role': 'system', 'content': VOICE_ENHANCEMENT_FOR_11LABS_SYSTEM_PROMPT},
        {'role': 'user', 'content': user_prompt}
    ]

    return h.call_api(messages, DialogueScenario)

for scenario_name, scenario_dict in scenarios_user_messages.items():
    filename = f"data/scenarios_user_messages_{scenario_name}_enhanced_results.json"
    if not os.path.isfile(filename):
        print(f"Processing scenario {scenario_name}")
        
        result = enhance_dialog_for_elevenlabs(scenario_dict['results'].dialogue)
        scenario_dict['results_for_voice'] = result
        
        with open(filename, "w") as f:
            f.write(scenarios_user_messages[scenario_name]['results_for_voice'].model_dump_json())
    else:
        with open(filename) as f:
            scenarios_user_messages[scenario_name]['results_for_voice'] = DialogueScenario.model_validate_json(f.read())
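As an optional sanity check, you can print a few original and enhanced turns side by side to see what the enhancement actually changed:

# compare the first few turns before and after enhancement for one scenario
sample = "Dementia Patient Missing Appointments"
for orig, enhanced in zip(
    scenarios_user_messages[sample]['results'].dialogue[:3],
    scenarios_user_messages[sample]['results_for_voice'].dialogue[:3],
):
    print(f"ORIGINAL : {orig.content}")
    print(f"ENHANCED : {enhanced.content}\n")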

… and finally generate the voices with ElevenLabs (though you could use another TTS model, such as Microsoft's VibeVoice):

import io
from IPython.display import Audio
import json
import hashlib

def hash_dict(d) -> str:
    # Serialize to a canonical JSON string (works for dicts as well as the list of turns passed below) and hash it
    encoded = json.dumps(d, sort_keys=True).encode("utf-8")
    return hashlib.sha256(encoded).hexdigest()

def speak(text: str, voice_id: str, voice_settings: dict) -> bytes:
    audio_stream = client.text_to_speech.convert(
        voice_id=voice_id,
        voice_settings=voice_settings,
        output_format="mp3_44100_128",
        text=text,
        model_id="eleven_flash_v2_5"
    )
    return b"".join(audio_stream)


def speak_dialog(dialog: list[dict], voice_map: dict) -> Audio:
    """
    dialog = [
        {"speaker": "CUSTOMER_SERVICE", "text": "..."},
        {"speaker": "CLIENT", "text": "..."},
    ]

    voice_map = {
        "CUSTOMER_SERVICE": {"voice_id": "voice_id_1", voice_settings = {},}
        "CLIENT": {"voice_id": "voice_id_1", voice_settings = {},}
    }
    """

    dialog_hash = hash_dict(dialog)
    dialog_filename = f'data/{dialog_hash}.raw'
    
    if not os.path.isfile(dialog_filename):
        combined = b""
        
        for turn in dialog:
            speaker = turn["speaker"]
            text = turn["text"]
            voice_id = voice_map[speaker]['voice_id']
            voice_settings = voice_map[speaker]['voice_settings']
    
            audio_bytes = speak(text, voice_id, voice_settings)
    
            # Add a short pause between utterances by appending raw zero bytes to the MP3 stream
            silence = b"\x00" * 4000  # ~4 kB of zeros; a hack, but most players treat it as a brief pause
    
            combined += audio_bytes + silence
            print(".", end="")
            
        with open(dialog_filename, "wb") as f:
            f.write(combined)
    else:
        print("!!!CACHED!!!", end="")
    with open(dialog_filename, "rb") as f:
        combined = f.read()

    return Audio(combined, autoplay=True)
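Appending raw zero bytes between MP3 chunks is a hack that most players tolerate. If you want proper pauses and have pydub (and ffmpeg) available, a sketch of a cleaner alternative could look like this (not used in the rest of the post):

# alternative pause handling -- assumes pydub and ffmpeg are installed
from pydub import AudioSegment
import io

def join_with_pauses(mp3_chunks: list[bytes], pause_ms: int = 600) -> bytes:
    # Decode each MP3 chunk, insert real silence between turns, and re-encode once at the end
    silence = AudioSegment.silent(duration=pause_ms)
    combined = AudioSegment.empty()
    for chunk in mp3_chunks:
        combined += AudioSegment.from_file(io.BytesIO(chunk), format="mp3") + silence
    out = io.BytesIO()
    combined.export(out, format="mp3")
    return out.getvalue()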
def generate_audio_for(scenario):
    dialog = []
    for turn in scenarios_user_messages[scenario]['results_for_voice'].dialogue:
        dialog.append({'speaker': turn.speaker.value.upper(), 'text': turn.content})
    
    voice_map = {
        "CUSTOMER_SERVICE": {
            "voice_id": "N0GCuK2B0qwWozQNTS8F", 
            'voice_settings': {
                "stability": 0.5, 
                "similarity_boost": 0.75,
                "style": 0.35
            }
        },
        "CLIENT": {
            'voice_id': "TxGEqnHWrfWFTfGW9XjX", 
            'voice_settings': {
                "stability": 0.3,
                "similarity_boost": 0.5,
                "style": 0.8,
                "use_speaker_boost": True
            }
        },
    }
    # print(voice_map)
    return speak_dialog(dialog, voice_map)
# warm the audio cache for every scenario (the returned Audio object is discarded here);
# the separate call below actually renders a playable widget for the first scenario
for dialog_title in list(scenarios_user_messages.keys()):
    print(dialog_title, end="")
    generate_audio_for(dialog_title)
    print("")
Elderly Patient Unable to Navigate NFZ Clinic Alone!!!CACHED!!!
Relative Overwhelmed by NFZ Queue Logistics!!!CACHED!!!
Dementia Patient Missing Appointments!!!CACHED!!!
Working Professional Unable to Accompany Parent!!!CACHED!!!
generate_audio_for(list(scenarios_user_messages.keys())[0])
!!!CACHED!!!

As you can see, this is a relatively simple way to dive into synthetic conversations, and it is a good starting point for learning about a new domain you don't have experience with.