That is HUGE! Thank you, @thegreekgeek@midwest.social! This makes customizing conversations from automations so much more powerful and flexible!
Yes, @thegreekgeek@midwest.social, now knowing that I can use sentence syntax in automations, I have built one automation to handle my specific needs. But each trigger is a hardcoded value instead of a “variable”. For example, trigger 1 is “sentence = ‘what is the date of my birthday’”, and I conditionally trigger an action to speak the value of input_date.event_1, because I know that’s where I stored the date for “my birthday”.
What would be awesome is your second suggestion: passing the name of the input_date helper through to the response with a wildcard. I can’t figure out how to do that. I’ve tried defining and using slots, but I just don’t understand the syntax. Which file do I define the slots in, and what is the syntax?
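To illustrate, here’s the kind of thing I’m imagining (an untested sketch - the {event_name} wildcard and the helper-naming scheme are made up, not working config):

automation:
  - trigger:
      - platform: conversation
        command:
          # A wildcard slot that should capture whatever event name I speak
          - "what is the date of {event_name}"
    action:
      # Echo the captured slot back, and look up a helper named after it
      - set_conversation_response: >-
          {{ trigger.slots.event_name }} is on
          {{ states('input_datetime.' ~ trigger.slots.event_name | replace(' ', '_')) }}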
Thanks, @thegreekgeek@midwest.social. I didn’t know you could use special sentence syntax in automations. That’s pretty helpful because an action can be conditional, and I think you can even make them conditional based on which specific trigger fired the automation.
It still seems odd that I’d have to make separate automations for each helper I want to address (or separate automation conditions for each), as opposed to having the spoken command contain a “variable” and then using that variable to determine which input helper to return the value of. But if that’s possible, maybe it’s just beyond my skill level.
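For instance, something like this, if I’m reading the docs right (a rough, untested sketch - the second sentence and both helper names are invented):

automation:
  - trigger:
      # Each trigger gets an id so the action can tell which sentence fired
      - platform: conversation
        id: birthday
        command:
          - "what is the date of my birthday"
      - platform: conversation
        id: anniversary
        command:
          - "what is the date of my anniversary"
    action:
      - choose:
          - conditions:
              - condition: trigger
                id: birthday
            sequence:
              - set_conversation_response: "{{ states('input_datetime.event_1') }}"
          - conditions:
              - condition: trigger
                id: anniversary
            sequence:
              - set_conversation_response: "{{ states('input_datetime.event_2') }}"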
Thanks for the references, but at least one example from https://www.home-assistant.io/integrations/intent_script/ seems to be wrong/outdated.
From that page:
Local lists
Sometimes you don’t need a slot list available for all intents and sentences, so you can define one locally, making it usable only in the context of the intent data (like a collection of sentences) where it was defined. For example:
language: en
intents:
  AddListItem:
    data:
      - sentences:
          - add {item} to [my] shopping list
        lists:
          item:
            wildcard: true
This is the code in my conversations.yaml:

intents:
  HowManyDaysUntil:
    data:
      - sentences:
          - how many days until {countdownname}
        lists:
          countdownname:
            - "this"
            - "that"
Here is the only difference I see between my code and the example above: the example has this first line, which mine lacks:
language: en
But when I add it, I get: Invalid config for 'conversation' at conversations.yaml, line 1: 'language' is an invalid option for 'conversation', check: conversation->language
And without it, this YAML gets: Invalid config for 'conversation' at conversations.yaml, line 9: value should be a string 'conversation->intents->HowManyDaysUntil->0', got None
Perhaps I can’t have intents in conversations.yaml? Or maybe not lists? I started this project by editing config/intents/sentences/en/_common.yaml, but that’s a bad idea because an update would wipe my customizations. What’s the appropriate place for me to add custom sentences/intents/responses/lists?
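My best guess from the docs is a separate file under config/custom_sentences/en/, shaped something like this (untested - the filename would be whatever I pick, and I think the list definition needs a values key):

language: "en"
intents:
  HowManyDaysUntil:
    data:
      - sentences:
          - "how many days until {countdownname}"
        lists:
          countdownname:
            values:
              - "this"
              - "that"

Can anyone confirm that’s the right spot?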
You’re welcome!
From what I understand, a timer’s duration is the amount of time the timer was set to run for when it was started - the total time, not the time remaining.
If you want to determine the time remaining in an active timer, you need something like:
{# Remaining time as H:MM; falls back to '00:00' if the timer is idle (no finishes_at attribute) #}
{% set finish_time = state_attr('timer.timer_entity_id', 'finishes_at') %}
{{ '00:00' if finish_time == None else (as_datetime(finish_time) - now()).total_seconds() | timestamp_custom('%H:%M', false) }}
Or this version, which breaks hours and minutes into speakable parts:
{% set finish_time = state_attr('timer.timer_entity_id', 'finishes_at') %}
{# Split the H:MM remaining time into integer hours and minutes #}
{% set hours, minutes = ('00:00' if finish_time == None else (as_datetime(finish_time) - now()).total_seconds() | timestamp_custom('%H:%M', false)).split(':') | map('int') %}
{# Build a speakable phrase, e.g. "1 hour and 5 minutes" #}
{{ '' if hours == 0 else hours ~ ' hour' if hours == 1 else hours ~ ' hours' }} {{ ' and ' if hours > 0 }} {{ minutes ~ ' minute' if minutes == 1 else minutes ~ ' minutes' }}
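If it helps, here’s how I’d wire the first template into a spoken reply (a sketch - timer.kitchen and the trigger sentence are placeholders):

automation:
  - trigger:
      - platform: conversation
        command:
          - "how long is left on the kitchen timer"
    action:
      # Answer through the voice pipeline with the remaining time
      - set_conversation_response: >-
          {% set finish_time = state_attr('timer.kitchen', 'finishes_at') %}
          {{ '00:00' if finish_time == None else (as_datetime(finish_time) - now()).total_seconds() | timestamp_custom('%H:%M', false) }}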
I haven’t done any research yet. Gitea includes an oauth2 provider. Does Forgejo also provide oauth2 authentication with a similar feature set?
Smart Home Junkie’s method worked perfectly! And it was quite easy and quick. Thanks for the reference, @paf@jlai.lu!
2023 has been the Year of the Voice, and please stay tuned, as we will host a final 5th chapter live stream on our YouTube channel on 13 December 2023, at 12:00 PST / 21:00 CET! But that is not the end of the voice journey… Be sure to tune in!
Thanks for checking. So I’m still too scared at the moment. :)
I want to flash my vac with Valetudo - my goal is to run local-only, no cloud. But I read through Valetudo’s instructions and it’s too scary for me. I don’t think I have the skills to solder a board together, and I barely understand all the steps. I can follow instructions, but there’s one point where they say something like “do these few steps within 180 seconds or you may brick your vacuum” - that’s too much risk for a ~$1000 vacuum. I didn’t upgrade the firmware on my vac, so maybe someday the process will be slightly easier and I’ll take the risk.
Will the valetudo vacuum card/integration work with an un-valetudo-ed vacuum?
Also, while this integration recognizes the mopping capability of the vacuum (it shows the status of the mop heads in terms of dry or moist), it doesn’t seem to offer the ability to control the mopping functionality (I can’t see a way to change the mop settings like I can with the vacuum’s fan speed). Interesting.
That worked wonderfully! Thanks, @ipp0@lemmy.world.
I had originally tried this HACS integration when I was first setting up my vacuum and it failed. The reason turned out to be a subnet issue, but by the time I figured that out, I had forgotten about the Dreame Vacuum integration.
Do you know if the Dreame Vacuum integration supports the video camera on the vacuum? Or is there another way to view the vacuum’s camera in HA?
I enabled the assist_pipeline and retrieved and listened to the audio files from my Echos, but when I went to look at the esphome/m5stack-atom-echo-wake-word.yaml file to edit the values for noise_suppression_level, auto_gain, or volume_multiplier, I found that the file doesn’t exist. I do have an esphome directory, and it contains m5stack-atom-echo-xxxxxx.yaml files for each of my Echos, but inside those .yaml files there is no voice_assistant section.
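Based on the ESPHome voice_assistant docs, I’d expect the missing section to look roughly like this (my guess - the microphone id and the values are placeholders, not anything from my files):

voice_assistant:
  microphone: echo_microphone   # id of the device's microphone component
  noise_suppression_level: 2    # 0 (off) to 4 (max)
  auto_gain: 31dBFS
  volume_multiplier: 2.0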
Can you please paste the contents of your m5stack-atom-echo-wake-word.yaml file (obfuscating anything private, of course)? I’ll try manually creating this file to see if the Echos recognize it.
Thanks!
I changed the VM’s CPU type in Proxmox and gave the VM more resources (most of the host’s RAM and CPU cores), and the delays were cut in half, to around 16 seconds. So I know what’s causing my delay (or probably most of it). I guess I need a beefier box.
Good questions. I haven’t talked to the assistant through the browser or phone yet - that’s a good way to help narrow down which process might be causing the delay.
I’m running HAOS in Proxmox on a mini PC with a Celeron. A couple of people have said they’re using beefy hardware, so I might need a new box.
I don’t yet know the range of these Echos, but they seem to do a great job listening. They also have a speaker, but it sounds super quiet, not really useful. If I want a verbal response I’ll have to push it through other speakers via an automation.
Excellent troubleshooting tip. Thanks.
I’m embarrassed but very pleased that your example also taught me about set_conversation_response! I had been using tts.speak, which meant I had to define a specific media player, which wasn’t always what I wanted to do. This is great!