mycroft package

mycroft.skills

MycroftSkill class - Base class for all Mycroft skills

class mycroft.MycroftSkill(name=None, bus=None, use_settings=True)[source]

Base class for mycroft skills providing common behaviour and parameters to all Skill implementations.

For information on how to get started with creating mycroft skills see https://mycroft.ai/documentation/skills/introduction-developing-skills/

Parameters:
  • name (str) – skill name
  • bus (MycroftWebsocketClient) – Optional bus connection
  • use_settings (bool) – Set to false to not use skill settings at all
acknowledge()[source]

Acknowledge a successful request.

This method plays a sound to acknowledge a request that does not require a verbal response. This is intended to provide simple feedback to the user that their request was handled successfully.

add_event(name, handler, handler_info=None, once=False)[source]

Create event handler for executing intent or other event.

Parameters:
  • name (string) – IntentParser name
  • handler (func) – Method to call
  • handler_info (string) – Base message when reporting skill event handler status on messagebus.
  • once (bool, optional) – Event handler will be removed after it has been run once.
ask_selection(options, dialog='', data=None, min_conf=0.65, numeric=False)[source]

Read options, ask dialog question and wait for an answer.

This automatically deals with fuzzy matching and selection by number, e.g. “first option”, “last option”, “second option”, “option number four”.
Parameters:
  • options (list) – list of options to present user
  • dialog (str) – a dialog id or string to read AFTER all options
  • data (dict) – Data used to render the dialog
  • min_conf (float) – minimum confidence for fuzzy match, if not reached return None
  • numeric (bool) – speak options as a numeric menu
Returns:

list element selected by user, or None

Return type:

string
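
A minimal sketch of a handler built around ask_selection; the ‘order.drink.intent’ intent file and the ‘which.drink’ and ‘making.drink’ dialog files are hypothetical skill resources:

from mycroft import MycroftSkill, intent_handler

class DrinkSkill(MycroftSkill):
    @intent_handler('order.drink.intent')   # hypothetical Padatious intent file
    def handle_order(self, message):
        options = ['espresso', 'latte', 'cappuccino']
        # Reads the options aloud, then the dialog, then listens for an answer.
        choice = self.ask_selection(options, dialog='which.drink', numeric=True)
        if choice is not None:
            self.speak_dialog('making.drink', data={'drink': choice})

def create_skill():
    return DrinkSkill()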

ask_yesno(prompt, data=None)[source]

Read prompt and wait for a yes/no answer

This automatically deals with translation and common variants, such as ‘yeah’, ‘sure’, etc.

Parameters:
  • prompt (str) – a dialog id or string to read
  • data (dict) – response data
Returns:

‘yes’, ‘no’ or the user’s response if it is neither of those; None if there was no response

Return type:

string
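
A minimal sketch using ask_yesno; the intent and dialog file names are hypothetical:

from mycroft import MycroftSkill, intent_handler

class ShoppingListSkill(MycroftSkill):
    @intent_handler('clear.list.intent')    # hypothetical intent file
    def handle_clear(self, message):
        response = self.ask_yesno('confirm.clear.list')
        if response == 'yes':
            self.speak_dialog('list.cleared')
        elif response == 'no':
            self.speak_dialog('keeping.list')
        else:
            # Anything else (including None) means there was no usable answer.
            self.speak_dialog('did.not.understand')

def create_skill():
    return ShoppingListSkill()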

bind(bus)[source]

Register messagebus emitter with skill.

Parameters:bus – Mycroft messagebus connection
cancel_all_repeating_events()[source]

Cancel any repeating events started by the skill.

cancel_scheduled_event(name)[source]

Cancel a pending event. The event will no longer be scheduled to be executed

Parameters:name (str) – reference name of event (from original scheduling)
config_core

Mycroft global configuration. (dict)

converse(utterances, lang=None)[source]

Handle conversation.

Once a skill has been invoked, this method gets a peek at utterances before the normal intent handling process.

To use, override the converse() method and return True to indicate that the utterance has been handled.

Parameters:
  • utterances (list) – The utterances from the user. If there are multiple utterances, consider them all to be transcription possibilities. Commonly, the first entry is the raw user utterance and the second is a normalized version of the first.
  • lang – language the utterance is in, None for default
Returns:

True if an utterance was handled, otherwise False

Return type:

bool
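
A minimal sketch of a skill overriding converse; the intent and dialog file names are hypothetical:

from mycroft import MycroftSkill, intent_handler

class GuessingGameSkill(MycroftSkill):
    def initialize(self):
        self.playing = False

    @intent_handler('start.game.intent')    # hypothetical intent file
    def handle_start(self, message):
        self.playing = True
        self.speak_dialog('guess.a.number', expect_response=True)

    def converse(self, utterances, lang=None):
        # Called with new utterances while this skill is active,
        # before normal intent handling.
        if self.playing and utterances:
            self.speak_dialog('received.guess', data={'guess': utterances[0]})
            return True     # utterance consumed, intent parsing is skipped
        return False        # let the intent service handle it

def create_skill():
    return GuessingGameSkill()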

default_shutdown()[source]

Parent function called internally to shut down everything.

Shuts down known entities and calls skill specific shutdown method.

disable_intent(intent_name)[source]

Disable a registered intent if it belongs to this skill.

Parameters:intent_name (string) – name of the intent to be disabled
Returns:True if disabled, False if it wasn’t registered
Return type:bool
enable_intent(intent_name)[source]

(Re)Enable a registered intent if it belongs to this skill.

Parameters:intent_name – name of the intent to be enabled
Returns:True if enabled, False if it wasn’t registered
Return type:bool
file_system

Filesystem access to skill specific folder. See mycroft.filesystem for details.

find_resource(res_name, res_dirname=None)[source]

Find a resource file

Searches for the given filename using this scheme:
  1. Search the resource lang directory:
    <skill>/<res_dirname>/<lang>/<res_name>
  2. Search the resource directory:
    <skill>/<res_dirname>/<res_name>
  3. Search the locale lang directory or other subdirectory:
    <skill>/locale/<lang>/<res_name> or <skill>/locale/<lang>/…/<res_name>
Parameters:
  • res_name (string) – The resource name to be found
  • res_dirname (string, optional) – A skill resource directory, such as ‘dialog’, ‘vocab’, ‘regex’ or ‘ui’. Defaults to None.
Returns:

The full path to the resource file or None if not found

Return type:

string

get_intro_message()[source]

Get a message to speak on first load of the skill.

Useful for post-install setup instructions.

Returns:message that will be spoken to the user
Return type:str
get_response(dialog='', data=None, validator=None, on_fail=None, num_retries=-1)[source]

Get response from user.

If a dialog is supplied it is spoken, followed immediately by listening for a user response. If the dialog is omitted listening is started directly.

The response can optionally be validated before returning.

Example

color = self.get_response(‘ask.favorite.color’)

Parameters:
  • dialog (str) – Optional dialog to speak to the user
  • data (dict) – Data used to render the dialog
  • validator (any) – Function with the following signature:
      def validator(utterance):
          return utterance != “red”
  • on_fail (any) – Dialog or function returning a literal string to speak on invalid input. For example:
      def on_fail(utterance):
          return “nobody likes the color red, pick another”
  • num_retries (int) – Times to ask user for input, -1 for infinite. NOTE: The user can stop by not responding until timeout or by saying “cancel”.
Returns:

User’s reply or None if timed out or canceled

Return type:

str
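
A minimal sketch using get_response with a validator; the intent and dialog file names are hypothetical:

from mycroft import MycroftSkill, intent_handler

class FavoriteColorSkill(MycroftSkill):
    @intent_handler('favorite.color.intent')    # hypothetical intent file
    def handle_favorite(self, message):
        def validator(utterance):
            # Reject "red"; anything else counts as a valid answer.
            return utterance != 'red'

        color = self.get_response('ask.favorite.color',
                                  validator=validator,
                                  on_fail='pick.another.color',
                                  num_retries=2)
        if color:
            self.speak_dialog('nice.color', data={'color': color})

def create_skill():
    return FavoriteColorSkill()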

get_scheduled_event_status(name)[source]

Get scheduled event data and return the amount of time left

Parameters:name (str) – reference name of event (from original scheduling)
Returns:the time left in seconds
Return type:int
Raises:Exception – Raised if event is not found
handle_disable_intent(message)[source]

Listener to disable a registered intent if it belongs to this skill.

handle_enable_intent(message)[source]

Listener to enable a registered intent if it belongs to this skill.

handle_remove_cross_context(message)[source]

Remove global context from intent service.

handle_set_cross_context(message)[source]

Add global context to intent service.

handle_settings_change(message)[source]

Update settings if the remote settings changes apply to this skill.

The skill settings downloader uses a single API call to retrieve the settings for all skills. This is done to limit the number of API calls. A “mycroft.skills.settings.changed” event is emitted for each skill that had its settings changed. Only update this skill’s settings if its remote settings were among those changed.

initialize()[source]

Perform any final setup needed for the skill.

Invoked after the skill is fully constructed and registered with the system.

lang

Get the configured language.

load_data_files(root_directory=None)[source]

Load data files (intents, dialogs, etc).

Parameters:root_directory (str) – root folder to use when loading files.
load_regex_files(root_directory)[source]

Load regex files found under the skill directory.

Parameters:root_directory (str) – root folder to use when loading files
load_vocab_files(root_directory)[source]

Load vocab files found under root_directory.

Parameters:root_directory (str) – root folder to use when loading files
location

Get the JSON data structure holding location information.

location_pretty

Get a more ‘human’ version of the location as a string.

location_timezone

Get the timezone code, such as ‘America/Los_Angeles’

log

Skill logger instance

make_active()[source]

Bump skill to active_skill list in intent_service.

This enables the converse method to be called even if the skill has not been used in the last 5 minutes.

register_entity_file(entity_file)[source]

Register an Entity file with the intent service.

An Entity file lists the exact values that an entity can hold. For example:

=== ask.day.intent ===
Is it {weekend}?

=== weekend.entity ===
Saturday
Sunday

Parameters:entity_file (string) – name of file that contains examples of an entity. Must end with ‘.entity’
register_intent(intent_parser, handler)[source]

Register an Intent with the intent service.

Parameters:
  • intent_parser – Intent, IntentBuilder object or padatious intent file to parse utterance for the handler.
  • handler (func) – function to register with intent
register_intent_file(intent_file, handler)[source]

Register an Intent file with the intent service.

For example:

=== food.order.intent ===
Order some {food}.
Order some {food} from {place}.
I’m hungry.
Grab some {food} from {place}.

Optionally, you can also use <register_entity_file> to specify some examples of {food} and {place}.

In addition, instead of writing out multiple variations of the same sentence you can write:

=== food.order.intent ===
(Order | Grab) some {food} (from {place} | ).
I’m hungry.

Parameters:
  • intent_file – name of file that contains example queries that should activate the intent. Must end with ‘.intent’
  • handler – function to register with intent
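
A minimal sketch registering the files from the example above in initialize; the resource files themselves (food.order.intent, food.entity, place.entity) are assumed to exist in the skill’s vocab/locale directory:

from mycroft import MycroftSkill

class FoodOrderSkill(MycroftSkill):
    def initialize(self):
        # Entity files list example values for {food} and {place}.
        self.register_entity_file('food.entity')
        self.register_entity_file('place.entity')
        self.register_intent_file('food.order.intent', self.handle_order)

    def handle_order(self, message):
        food = message.data.get('food')
        place = message.data.get('place', 'the usual place')
        self.speak('Ordering {} from {}'.format(food, place))

def create_skill():
    return FoodOrderSkill()
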
register_regex(regex_str)[source]

Register a new regex.

Parameters:regex_str – Regex string

register_resting_screen()[source]

Registers resting screen from the resting_screen_handler decorator.

This only allows one screen; if two are registered, only one will be used.

register_vocabulary(entity, entity_type)[source]

Register a word to a keyword

Parameters:
  • entity – word to register
  • entity_type – Intent handler entity to tie the word to
reload_skill

allow reloading (default True)

remove_context(context)[source]

Remove a keyword from the context manager.

remove_cross_skill_context(context)[source]

Tell all skills to remove a keyword from the context manager.

remove_event(name)[source]

Removes an event from bus emitter and events list.

Parameters:name (string) – Name of Intent or Scheduler Event
Returns:True if found and removed, False if not found
Return type:bool
report_metric(name, data)[source]

Report a skill metric to the Mycroft servers.

Parameters:
  • name (str) – Name of metric. Must use only letters and hyphens
  • data (dict) – JSON dictionary to report. Must be valid JSON
schedule_event(handler, when, data=None, name=None)[source]

Schedule a single-shot event.

Parameters:
  • handler – method to be called
  • when (datetime/int/float) – datetime (in system timezone) or number of seconds in the future when the handler should be called
  • data (dict, optional) – data to send when the handler is called
  • name (str, optional) – reference name. NOTE: This will not warn about or replace a previously scheduled event of the same name.
schedule_repeating_event(handler, when, frequency, data=None, name=None)[source]

Schedule a repeating event.

Parameters:
  • handler – method to be called
  • when (datetime) – time (in system timezone) for first calling the handler, or None to initially trigger <frequency> seconds from now
  • frequency (float/int) – time in seconds between calls
  • data (dict, optional) – data to send when the handler is called
  • name (str, optional) – reference name, must be unique
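
A minimal sketch of both scheduling calls; the intent and dialog file names are hypothetical, and the handlers take a message argument carrying the scheduled data:

from mycroft import MycroftSkill, intent_handler

class TeaTimerSkill(MycroftSkill):
    def initialize(self):
        # Repeating event: first run 300 seconds from now, then every 300 s.
        self.schedule_repeating_event(self.refresh, None, 300, name='Refresh')

    @intent_handler('tea.timer.intent')     # hypothetical intent file
    def handle_tea_timer(self, message):
        # One-shot event 180 seconds from now (an int/float is an offset in
        # seconds; a datetime in the system timezone also works).
        self.schedule_event(self.tea_ready, 180,
                            data={'drink': 'tea'}, name='TeaTimer')

    def tea_ready(self, message):
        self.speak_dialog('tea.ready', data=message.data)

    def refresh(self, message):
        self.log.info('refreshing cached data')

def create_skill():
    return TeaTimerSkill()
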
send_email(title, body)[source]

Send an email to the registered user’s email.

Parameters:
  • title (str) – Title of email
  • body (str) – HTML body of email. This supports simple HTML like bold and italics
set_context(context, word='', origin='')[source]

Add context to intent service

Parameters:
  • context – Keyword
  • word – word connected to keyword
  • origin – origin of context
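
A minimal sketch of conversational context with Adapt; the vocabulary files (TeaKeyword.voc, Yes.voc) and dialog files are hypothetical:

from adapt.intent import IntentBuilder
from mycroft import MycroftSkill, intent_handler

class TeaSkill(MycroftSkill):
    @intent_handler(IntentBuilder('TeaIntent').require('TeaKeyword'))
    def handle_tea(self, message):
        self.speak_dialog('want.milk', expect_response=True)
        # Make 'MilkContext' available so the follow-up intent below can match.
        self.set_context('MilkContext')

    @intent_handler(IntentBuilder('MilkYesIntent')
                    .require('Yes').require('MilkContext'))
    def handle_milk_yes(self, message):
        self.speak_dialog('adding.milk')
        self.remove_context('MilkContext')

def create_skill():
    return TeaSkill()
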
set_cross_skill_context(context, word='')[source]

Tell all skills to add a context to intent service

Parameters:
  • context – Keyword
  • word – word connected to keyword
shutdown()[source]

Optional shutdown procedure implemented by subclass.

This method is intended to be called during the skill process termination. The skill implementation must shutdown all processes and operations in execution.

speak(utterance, expect_response=False, wait=False)[source]

Speak a sentence.

Parameters:
  • utterance (str) – sentence mycroft should speak
  • expect_response (bool) – set to True if Mycroft should listen for a response immediately after speaking the utterance.
  • wait (bool) – set to True to block while the text is being spoken.
speak_dialog(key, data=None, expect_response=False, wait=False)[source]

Speak a random sentence from a dialog file.

Parameters:
  • key (str) – dialog file key (e.g. “hello” to speak from the file “locale/en-us/hello.dialog”)
  • data (dict) – information used to populate sentence
  • expect_response (bool) – set to True if Mycroft should listen for a response immediately after speaking the utterance.
  • wait (bool) – set to True to block while the text is being spoken.
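
A minimal sketch; the intent file and the current.weather.dialog file (e.g. “It is {condition} and {temperature} degrees.”) are hypothetical:

from mycroft import MycroftSkill, intent_handler

class WeatherDemoSkill(MycroftSkill):
    @intent_handler('current.weather.intent')   # hypothetical intent file
    def handle_weather(self, message):
        # Speaks a random line from locale/en-us/current.weather.dialog,
        # filling in the {condition} and {temperature} placeholders.
        self.speak_dialog('current.weather',
                          data={'condition': 'sunny', 'temperature': 21})

def create_skill():
    return WeatherDemoSkill()
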
stop()[source]

Optional method implemented by subclass.

translate(text, data=None)[source]

Load a translatable single string resource

The string is loaded from a file in the skill’s dialog subdirectory
‘dialog/<lang>/<text>.dialog’

The string is randomly chosen from the file and rendered, replacing mustache placeholders with values found in the data dictionary.

Parameters:
  • text (str) – The base filename (no extension needed)
  • data (dict, optional) – a JSON dictionary
Returns:

A randomly chosen string from the file

Return type:

str

translate_list(list_name, data=None)[source]

Load a list of translatable string resources

The strings are loaded from a list file in the skill’s dialog subdirectory.

‘dialog/<lang>/<list_name>.list’

The strings are loaded and rendered, replacing mustache placeholders with values found in the data dictionary.

Parameters:
  • list_name (str) – The base filename (no extension needed)
  • data (dict, optional) – a JSON dictionary
Returns:

The loaded list of strings with items in consistent positions regardless of the language.

Return type:

list of str

translate_namedvalues(name, delim=', ')[source]

Load translation dict containing names and values.

This loads a simple CSV from the ‘dialog’ folders. The name is the first item on each line, the value is the second. Lines prefixed with # or // are ignored.

Parameters:
  • name (str) – name of the .value file, no extension needed
  • delim (char) – delimiter character used, default is ‘,’
Returns:

name and value dictionary, or empty dict if load fails

Return type:

dict
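
A minimal sketch; dialog/en-us/color.codes.value is a hypothetical CSV resource with lines such as “red,#ff0000”:

from mycroft import MycroftSkill, intent_handler

class ColorCodeSkill(MycroftSkill):
    @intent_handler('color.code.intent')    # hypothetical intent file
    def handle_color_code(self, message):
        # Loads the name/value pairs from the .value file into a dict.
        codes = self.translate_namedvalues('color.codes')
        color = message.data.get('color', 'red')
        self.speak(codes.get(color, 'an unknown color'))

def create_skill():
    return ColorCodeSkill()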

translate_template(template_name, data=None)[source]

Load a translatable template.

The strings are loaded from a template file in the skill’s dialog subdirectory.

‘dialog/<lang>/<template_name>.template’

The strings are loaded and rendered, replacing mustache placeholders with values found in the data dictionary.

Parameters:
  • template_name (str) – The base filename (no extension needed)
  • data (dict, optional) – a JSON dictionary
Returns:

The loaded template file

Return type:

list of str

update_scheduled_event(name, data=None)[source]

Change data of event.

Parameters:
  • name (str) – reference name of event (from original scheduling)
  • data (dict) – event data
voc_match(utt, voc_filename, lang=None)[source]

Determine if the given utterance contains the vocabulary provided.

Checks for vocabulary match in the utterance instead of the other way around to allow the user to say things like “yes, please” and still match against “Yes.voc” containing only “yes”. The method first checks in the current skill’s .voc files and secondly the “res/text” folder of mycroft-core. The result is cached to avoid hitting the disk each time the method is called.

Parameters:
  • utt (str) – Utterance to be tested
  • voc_filename (str) – Name of vocabulary file (e.g. ‘yes’ for ‘res/text/en-us/yes.voc’)
  • lang (str) – Language code, defaults to self.lang
Returns:

True if the utterance contains the given vocabulary, otherwise False

Return type:

bool
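
A minimal sketch combining get_response and voc_match; the intent and dialog files are hypothetical, while ‘yes’ refers to the yes.voc shipped with mycroft-core:

from mycroft import MycroftSkill, intent_handler

class ResetSkill(MycroftSkill):
    @intent_handler('reset.settings.intent')    # hypothetical intent file
    def handle_reset(self, message):
        answer = self.get_response('confirm.reset')
        # voc_match accepts "yes", "yeah", "yes please" and similar variants.
        if answer and self.voc_match(answer, 'yes'):
            self.speak_dialog('resetting')
        else:
            self.speak_dialog('cancelled')

def create_skill():
    return ResetSkill()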

CommonIoTSkill class

class mycroft.skills.common_iot_skill.CommonIoTSkill(name=None, bus=None, use_settings=True)[source]

Bases: mycroft.skills.mycroft_skill.mycroft_skill.MycroftSkill, abc.ABC

Skills that want to work with the CommonIoT system should extend this class. Subclasses will be expected to implement two methods, can_handle and run_request. See the documentation for those functions for more details on how they are expected to behave.

Subclasses may also register their own entities and scenes. See the register_entities and register_scenes methods for details.

This class works in conjunction with a controller skill. The controller registers vocabulary and intents to capture IoT related requests. It then emits messages on the messagebus that will be picked up by all skills that extend this class. Each skill will have the opportunity to declare whether or not it can handle the given request. Skills that acknowledge that they are capable of handling the request will be considered candidates, and after a short timeout, a winner, or winners, will be chosen. With this setup, a user can have several IoT systems, and control them all without worry that skills will step on each other.
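
A minimal sketch of a subclass; the device name is hypothetical, and the Action/Thing enums and the IoTRequest property names used below are assumptions based on the mycroft.skills.common_iot_skill module rather than part of the text above:

from mycroft.skills.common_iot_skill import (Action, CommonIoTSkill,
                                              IoTRequest, Thing)

class DeskLampSkill(CommonIoTSkill):
    def initialize(self):
        # Call once get_entities()/get_scenes() can return correct results.
        self.register_entities_and_scenes()

    def get_entities(self):
        return ['desk lamp']    # hypothetical user-facing device name

    def can_handle(self, request: IoTRequest):
        # Claim the request only if every non-None property is understood.
        if (request.action in (Action.ON, Action.OFF)
                and request.thing in (None, Thing.LIGHT)
                and request.entity in (None, 'desk lamp')):
            # callback_data travels over the bus; keep it JSON serializable.
            return True, {'turn_on': request.action == Action.ON}
        return False, None

    def run_request(self, request: IoTRequest, callback_data: dict):
        # Invoked only if this skill is chosen to handle the request.
        self.speak('Turning the desk lamp '
                   + ('on' if callback_data['turn_on'] else 'off'))

def create_skill():
    return DeskLampSkill()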

bind(bus)[source]

Overrides MycroftSkill.bind.

This is called automatically during setup, and need not otherwise be used.

Subclasses that override this method must call this via super in their implementation.

Parameters:bus – Mycroft messagebus connection
can_handle(request: mycroft.skills.common_iot_skill.IoTRequest)[source]

Determine if an IoTRequest can be handled by this skill.

This method must be implemented by all subclasses.

An IoTRequest contains several properties (see the documentation for that class). This method should return True if and only if this skill can take the appropriate ‘action’ when considering _all other properties of the request_. In other words, for a partial match, one in which any piece of the IoTRequest is not None and is not known to this skill, this should return (False, None).

Parameters:request – IoTRequest
Returns: (boolean, dict)

True if and only if this skill knows about all the properties set on the IoTRequest, and a dict containing callback_data. If this skill is chosen to handle the request, this dict will be supplied to run_request.

Note that the dictionary will be sent over the bus, and thus must be JSON serializable.

get_entities() → [<class 'str'>][source]

Get a list of custom entities.

This is intended to be overridden by subclasses, though it is not required (the default implementation will return an empty list).

The strings returned by this function will be registered as ENTITY values with the intent parser. Skills should provide group names, user aliases for specific devices, or anything else that might represent a THING or a set of THINGs, e.g. ‘bedroom’, ‘lamp’, ‘front door.’ This allows commands that don’t explicitly include a THING to still be handled, e.g. “bedroom off” as opposed to “bedroom lights off.”

get_scenes() → [<class 'str'>][source]

Get a list of custom scenes.

This method is intended to be overridden by subclasses, though it is not required. The strings returned by this function will be registered as SCENE values with the intent parser. Skills should provide user defined scene names that they are aware of and capable of handling, e.g. “relax,” “movie time,” etc.

register_entities_and_scenes()[source]

This method will register this skill’s scenes and entities.

This should be called in the skill’s initialize method, at some point after get_entities and get_scenes can be expected to return correct results.

run_request(request: mycroft.skills.common_iot_skill.IoTRequest, callback_data: dict)[source]

Handle an IoT Request.

All subclasses must implement this method.

When this skill is chosen as a winner, this function will be called. It will be passed an IoTRequest equivalent to the one that was supplied to can_handle, as well as the callback_data returned by can_handle.

Parameters:
  • request – IoTRequest
  • callback_data – dict
speak(utterance, *args, **kwargs)[source]

Speak a sentence.

Parameters:
  • utterance (str) – sentence mycroft should speak
  • expect_response (bool) – set to True if Mycroft should listen for a response immediately after speaking the utterance.
  • wait (bool) – set to True to block while the text is being spoken.
supported_request_version

Get the supported IoTRequestVersion

By default, this returns IoTRequestVersion.V1. Subclasses should override this to indicate higher levels of support.

The documentation for IoTRequestVersion provides a reference indicating which fields are included in each version. Note that you should always take the latest, and account for all request fields.

CommonPlaySkill class

class mycroft.skills.common_play_skill.CommonPlaySkill(name=None, bus=None)[source]

Bases: mycroft.skills.mycroft_skill.mycroft_skill.MycroftSkill, abc.ABC

To integrate with the common play infrastructure of Mycroft, skills should use this base class and override the two methods CPS_match_query_phrase (for checking if the skill can play the utterance) and CPS_start (for launching the media).

The class makes the skill available to queries from the mycroft-playback-control skill and no special vocab for starting playback is needed.
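
A minimal sketch of a subclass; the station name and stream URL are hypothetical:

from mycroft.skills.common_play_skill import CommonPlaySkill, CPSMatchLevel

class DemoRadioSkill(CommonPlaySkill):
    def CPS_match_query_phrase(self, phrase):
        # Called for every "Play ..." request; return None if no match.
        if 'demo radio' in phrase.lower():
            # Matched phrase, confidence level and callback data for CPS_start.
            return ('demo radio', CPSMatchLevel.EXACT,
                    {'url': 'http://example.com/stream'})
        return None

    def CPS_start(self, phrase, data):
        # Only called if this skill won the query; hand the stream to
        # the audio service.
        self.CPS_play(data['url'])

def create_skill():
    return DemoRadioSkill()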

CPS_match_query_phrase(phrase)[source]

Analyze phrase to see if it is a play-able phrase with this skill.

Parameters:phrase (str) – User phrase uttered after “Play”, e.g. “some music”
Returns:
Tuple containing a string with the appropriate matching phrase, the CPSMatchLevel, and optionally data to return in the callback if the match is selected.
Return type:(match, CPSMatchLevel[, callback_data]) or None
CPS_play(*args, **kwargs)[source]

Begin playback of a media file or stream

Normally this method will be invoked with something like:
self.CPS_play(url)
Advanced use can also include keyword arguments, such as:
self.CPS_play(url, repeat=True)
Parameters:same arguments as the Audioservice.play method
CPS_start(phrase, data)[source]

Begin playing whatever is specified in ‘phrase’

Parameters:
  • phrase (str) – User phrase uttered after “Play”, e.g. “some music”
  • data (dict) – Callback data specified in match_query_phrase()
bind(bus)[source]

Overrides the normal bind method. Adds handlers for play:query and play:start messages allowing interaction with the playback control skill.

This is called automatically during setup, and need not otherwise be used.

stop()[source]

Optional method implemented by subclass.

CommonQuerySkill class

class mycroft.skills.common_query_skill.CommonQuerySkill(name=None, bus=None)[source]

Bases: mycroft.skills.mycroft_skill.mycroft_skill.MycroftSkill, abc.ABC

Question answering skills should be based on this class. The skill author needs to implement CQS_match_query_phrase returning an answer and can optionally implement CQS_action to perform additional actions if the skill’s answer is selected.

This class works in conjunction with skill-query, which collects answers from several skills and presents the best one available.
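
A minimal sketch of a subclass; the question text is hypothetical, and note that the returned tuple here also includes the answer to be spoken, which is an assumption about the common query contract beyond the return description below:

from mycroft.skills.common_query_skill import CommonQuerySkill, CQSMatchLevel

class MeaningOfLifeSkill(CommonQuerySkill):
    def CQS_match_query_phrase(self, phrase):
        if 'meaning of life' in phrase.lower():
            answer = 'The answer is 42.'
            # (matched phrase, confidence level, spoken answer, callback data)
            return (phrase, CQSMatchLevel.EXACT, answer, {'source': 'demo'})
        return None     # this skill cannot answer the question

    def CQS_action(self, phrase, data):
        # Optional: only called if this skill's answer was selected.
        self.log.info('Answer selected, callback data: %s' % data)

def create_skill():
    return MeaningOfLifeSkill()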

CQS_action(phrase, data)[source]

Take additional action IF the skill is selected. The speech is handled by the common query system, but if the chosen skill wants to display media, set a context or prepare for sending information over e-mail, this can be implemented here.

Parameters:
  • phrase (str) – User phrase, i.e. the question asked
  • data (dict) – Callback data specified in CQS_match_query_phrase()
CQS_match_query_phrase(phrase)[source]

Analyze phrase to see if this skill can answer it. Needs to be implemented by the skill.

Parameters:phrase (str) – User phrase, i.e. the question asked
Returns:
Tuple containing a string with the appropriate matching phrase, the CQSMatchLevel, and optionally data to return in the callback if the match is selected.
Return type:(match, CQSMatchLevel[, callback_data]) or None
bind(bus)[source]

Overrides the default bind method of MycroftSkill.

This registers messagebus handlers for the skill during startup but is nothing the skill author needs to consider.

FallbackSkill class

class mycroft.FallbackSkill(name=None, bus=None, use_settings=True)[source]

Bases: mycroft.skills.mycroft_skill.mycroft_skill.MycroftSkill

Fallbacks come into play when no skill matches an utterance with an Adapt intent or closely with a Padatious intent. All Fallback skills work together to give them a view of the user’s utterance. Fallback handlers are called in an order determined by the priority provided when the handler is registered.

Priority   Who?       Purpose
1-4        RESERVED   Unused for now, slot for pre-Padatious if needed
5          MYCROFT    Padatious near match (conf > 0.8)
6-88       USER       General
89         MYCROFT    Padatious loose match (conf > 0.5)
90-99      USER       Uncaught intents
100+       MYCROFT    Fallback Unknown or other future use

Handlers with the numerically lowest priority are invoked first. Multiple fallbacks can exist at the same priority, but no order is guaranteed.

A Fallback can either observe or consume an utterance. A consumed utterance will not be seen by any other Fallback handlers.
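
A minimal sketch of a fallback handler registered at a USER priority; the skill name and trigger word are hypothetical:

from mycroft import FallbackSkill

class EchoFallbackSkill(FallbackSkill):
    def initialize(self):
        # Priority 50 sits in the USER range (6-88) of the table above.
        self.register_fallback(self.handle_fallback, 50)

    def handle_fallback(self, message):
        utterance = message.data.get('utterance', '')
        if 'echo' in utterance:
            self.speak(utterance)
            return True     # consume the utterance
        return False        # let lower-priority fallbacks have a look

def create_skill():
    return EchoFallbackSkill()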

default_shutdown()[source]

Remove all registered handlers and perform skill shutdown.

classmethod make_intent_failure_handler(bus)[source]

Goes through all fallback handlers until one returns True

register_fallback(handler, priority)[source]

Register a fallback with the list of fallback handlers and with the list of handlers registered by this instance

classmethod remove_fallback(handler_to_del)[source]

Remove a fallback handler.

Parameters:handler_to_del – reference to handler
remove_instance_handlers()[source]

Remove all fallback handlers registered by the fallback skill.

AudioService class

class mycroft.skills.audioservice.AudioService(bus)[source]

Bases: object

AudioService class for interacting with the audio subsystem

Parameters:bus – Mycroft messagebus connection
available_backends()[source]

Return available audio backends.

Returns:dict with backend names as keys
next()[source]

Change to next track.

pause()[source]

Pause playback.

play(tracks=None, utterance=None, repeat=None)[source]

Start playback.

Parameters:
  • tracks – track uri or list of track uris. Each track can be added as a tuple with (uri, mime) to give a hint of the mime type to the system
  • utterance – forward utterance for further processing by the audio service.
  • repeat – if the playback should be looped
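
A minimal sketch of a skill driving the AudioService; the intent file name and stream URL are hypothetical:

from mycroft import MycroftSkill, intent_handler
from mycroft.skills.audioservice import AudioService

class StreamPlayerSkill(MycroftSkill):
    def initialize(self):
        # The AudioService needs the skill's messagebus connection.
        self.audio = AudioService(self.bus)

    @intent_handler('play.stream.intent')   # hypothetical intent file
    def handle_play(self, message):
        # A (uri, mime) tuple hints the mime type to the chosen backend.
        self.audio.play([('http://example.com/stream.mp3', 'audio/mpeg')])

    def stop(self):
        self.audio.stop()

def create_skill():
    return StreamPlayerSkill()
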
prev()[source]

Change to previous track.

queue(tracks=None)[source]

Queue up a track to the playing playlist.

Parameters:tracks – track uri or list of track uri’s
resume()[source]

Resume paused playback.

seek(seconds=1)[source]

Seek X seconds.

Parameters:seconds (int) – number of seconds to seek; if negative, rewind
seek_backward(seconds=1)[source]

Rewind X seconds.

Parameters:seconds (int) – number of seconds to rewind
seek_forward(seconds=1)[source]

Skip ahead X seconds.

Parameters:seconds (int) – number of seconds to skip
stop()[source]

Stop the track.

track_info()[source]

Request information of current playing track.

Returns:Dict with track info.

intent_handler decorator

mycroft.intent_handler(intent_parser)[source]

Decorator for adding a method as an intent handler.
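
A minimal sketch showing both accepted argument types; the Hello vocabulary, the goodbye.intent file and the dialog files are hypothetical skill resources:

from adapt.intent import IntentBuilder
from mycroft import MycroftSkill, intent_handler

class GreetingsSkill(MycroftSkill):
    # Adapt: build the intent from registered vocabulary.
    @intent_handler(IntentBuilder('HelloIntent').require('Hello'))
    def handle_hello(self, message):
        self.speak_dialog('hello')

    # Padatious: reference an example-sentence .intent file directly.
    @intent_handler('goodbye.intent')
    def handle_goodbye(self, message):
        self.speak_dialog('goodbye')

def create_skill():
    return GreetingsSkill()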

intent_file_handler decorator

mycroft.intent_file_handler(intent_file)[source]

Decorator for adding a method as an intent file handler.

This decorator is deprecated, use intent_handler for the same effect.

adds_context decorator

mycroft.adds_context(context, words='')[source]

Adds context to context manager.

removes_context decorator

mycroft.removes_context(context)[source]

Removes context from the context manager.

mycroft.util

Parsing functions for extracting data from natural speech.

Formatting functions for producing natural speech from common datatypes such as numbers, dates and times.

A collection of functions for handling local, system and global times.