muspy
A toolkit for symbolic music generation.
MusPy is an open source Python library for symbolic music generation. It provides essential tools for developing a music generation system, including dataset management, data I/O, data preprocessing and model evaluation.
Features
Dataset management system for commonly used datasets with interfaces to PyTorch and TensorFlow.
Data I/O for common symbolic music formats (e.g., MIDI, MusicXML and ABC) and interfaces to other symbolic music libraries (e.g., music21, mido, pretty_midi and Pypianoroll).
Implementations of common music representations for music generation, including the pitch-based, the event-based, the piano-roll and the note-based representations.
Model evaluation tools for music generation systems, including audio rendering, score and piano-roll visualizations and objective metrics.
- class muspy.Base(**kwargs)[source]
Base class for MusPy classes.
This is the base class for MusPy classes. It provides two handy I/O methods, from_dict and to_ordered_dict. It also provides an intuitive repr as well as the methods pretty_str and print for printing the content in a readable format.
In addition, hash is implemented as hash(repr(self)). Comparisons between two Base objects are also supported: the equality check compares all attributes, while 'less than' and 'greater than' compare only the time attribute.
Hint
To implement a new class in MusPy, please inherit from this class and set the following class variables properly.
_attributes: An OrderedDict with attribute names as keys and their types as values.
_optional_attributes: A list of optional attribute names.
_list_attributes: A list of attributes that are lists.
Take muspy.Note for example:

_attributes = OrderedDict(
    [
        ("time", int),
        ("duration", int),
        ("pitch", int),
        ("velocity", int),
        ("pitch_str", str),
    ]
)
_optional_attributes = ["pitch_str"]
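As a hedged illustration of how such a spec can drive generic I/O, here is a minimal plain-Python sketch (SimpleBase and SimpleNote are hypothetical stand-ins, not MusPy's actual implementation):

```python
from collections import OrderedDict

class SimpleBase:
    """Minimal sketch of a spec-driven base class; not MusPy's actual code."""

    _attributes = OrderedDict()   # attribute name -> expected type
    _optional_attributes = []     # attribute names that may be omitted

    def __init__(self, **kwargs):
        for name in self._attributes:
            setattr(self, name, kwargs.get(name))

    @classmethod
    def from_dict(cls, dict_):
        # Keep only the declared attributes when constructing the object.
        return cls(**{k: v for k, v in dict_.items() if k in cls._attributes})

    def to_ordered_dict(self, skip_missing=True):
        out = OrderedDict()
        for name in self._attributes:
            value = getattr(self, name)
            if skip_missing and value is None:
                continue
            out[name] = value
        return out

class SimpleNote(SimpleBase):
    _attributes = OrderedDict(
        [("time", int), ("duration", int), ("pitch", int),
         ("velocity", int), ("pitch_str", str)]
    )
    _optional_attributes = ["pitch_str"]

note = SimpleNote.from_dict(
    {"time": 0, "duration": 24, "pitch": 60, "velocity": 64}
)
```

With skip_missing=True, the unset optional attribute pitch_str is omitted from the resulting OrderedDict.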
See also
muspy.ComplexBase
Base class that supports advanced operations on list attributes.
- classmethod from_dict(dict_, strict=False, cast=False)[source]
Return an instance constructed from a dictionary.
Instantiate an object whose attributes and the corresponding values are given as a dictionary.
- Parameters
- Returns
- Return type
Constructed object.
- to_ordered_dict(skip_missing=True, deepcopy=True)[source]
Return the object as an OrderedDict.
Return an ordered dictionary that stores the attributes and their values as key-value pairs.
- Parameters
- Returns
A dictionary that stores the attributes and their values as key-value pairs, e.g., {“attr1”: value1, “attr2”: value2}.
- Return type
OrderedDict
- copy()[source]
Return a shallow copy of the object.
This is equivalent to
copy.copy(self)
- Returns
- Return type
Shallow copy of the object.
- deepcopy()[source]
Return a deep copy of the object.
This is equivalent to
copy.deepcopy(self)
- Returns
- Return type
Deep copy of the object.
- pretty_str(skip_missing=True)[source]
Return the attributes as a string in a YAML-like format.
- Parameters
skip_missing (bool, default: True) – Whether to skip attributes with value None or those that are empty lists.
- Returns
Stored data as a string in a YAML-like format.
- Return type
str
See also
muspy.Base.print()
Print the attributes in a YAML-like format.
- print(skip_missing=True)[source]
Print the attributes in a YAML-like format.
- Parameters
skip_missing (bool, default: True) – Whether to skip attributes with value None or those that are empty lists.
See also
muspy.Base.pretty_str()
Return the attributes as a string in a YAML-like format.
- validate_type(attr=None, recursive=True)[source]
Raise an error if an attribute is of an invalid type.
This will apply recursively to an attribute’s attributes.
- Parameters
- Returns
- Return type
Object itself.
See also
muspy.Base.is_valid_type()
Return True if an attribute is of a valid type.
muspy.Base.validate()
Raise an error if an attribute has an invalid type or value.
- validate(attr=None, recursive=True)[source]
Raise an error if an attribute has an invalid type or value.
This will apply recursively to an attribute’s attributes.
- Parameters
- Returns
- Return type
Object itself.
See also
muspy.Base.is_valid()
Return True if an attribute has a valid type and value.
muspy.Base.validate_type()
Raise an error if an attribute is of an invalid type.
- is_valid_type(attr=None, recursive=True)[source]
Return True if an attribute is of a valid type.
This will apply recursively to an attribute’s attributes.
- Parameters
- Returns
Whether the attribute is of a valid type.
- Return type
bool
See also
muspy.Base.validate_type()
Raise an error if a certain attribute is of an invalid type.
muspy.Base.is_valid()
Return True if an attribute has a valid type and value.
- is_valid(attr=None, recursive=True)[source]
Return True if an attribute has a valid type and value.
This will recursively apply to an attribute’s attributes.
- Parameters
- Returns
Whether the attribute has a valid type and value.
- Return type
bool
See also
muspy.Base.validate()
Raise an error if an attribute has an invalid type or value.
muspy.Base.is_valid_type()
Return True if an attribute is of a valid type.
- class muspy.ComplexBase(**kwargs)[source]
Base class that supports advanced operations on list attributes.
This class extends the Base class with advanced operations on list attributes, including append, remove_invalid, remove_duplicate and sort.
See also
muspy.Base
Base class for MusPy classes.
- append(obj)[source]
Append an object to the corresponding list.
This will automatically determine the list attributes to append based on the type of the object.
- Parameters
obj – Object to append.
- extend(other, deepcopy=False)[source]
Extend the list(s) with another object or iterable.
- Parameters
other (muspy.ComplexBase or iterable) – If an object of the same type is given, extend the list attributes with the corresponding list attributes of the other object. If an iterable is given, call muspy.ComplexBase.append() for each item.
deepcopy (bool, default: False) – Whether to make deep copies of the appended objects.
- Returns
- Return type
Object itself.
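The type-based dispatch described for append can be sketched as follows (DemoNote, DemoLyric and DemoTrack are hypothetical stand-ins for MusPy classes; this is not MusPy's actual ComplexBase):

```python
class Container:
    """Sketch of type-based append dispatch; not MusPy's actual ComplexBase."""

    # Maps element type -> name of the list attribute that stores it.
    _list_attributes = {}

    def append(self, obj):
        for typ, attr in self._list_attributes.items():
            if isinstance(obj, typ):
                getattr(self, attr).append(obj)
                return self
        raise TypeError(f"cannot append object of type {type(obj).__name__}")

    def extend(self, iterable):
        for item in iterable:
            self.append(item)
        return self

# Hypothetical element types standing in for muspy.Note and muspy.Lyric.
class DemoNote:
    pass

class DemoLyric:
    pass

class DemoTrack(Container):
    _list_attributes = {DemoNote: "notes", DemoLyric: "lyrics"}

    def __init__(self):
        self.notes, self.lyrics = [], []

track = DemoTrack()
track.append(DemoNote())
track.extend([DemoNote(), DemoLyric()])
```

Each appended object is routed to the list attribute registered for its type, so callers never name the target list explicitly.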
- class muspy.Annotation(time, annotation, group=None)[source]
A container for annotations.
- annotation
Annotation of any type.
- Type
any
- class muspy.Chord(time, pitches, duration, velocity=None, pitches_str=None)[source]
A container for chords.
- pitches
Note pitches, as MIDI note numbers. Valid values are 0 to 127.
- Type
list of int
- velocity
Chord velocity. Valid values are 0 to 127.
- Type
int, default: muspy.DEFAULT_VELOCITY (64)
- pitches_str
Note pitches as strings, useful for distinguishing, e.g., C# and Db.
- Type
list of str, optional
- property start
Start time of the chord.
- property end
End time of the chord.
- adjust_time(func, attr=None, recursive=True)[source]
Adjust the timing of the chord.
- Parameters
- Returns
- Return type
Object itself.
- class muspy.KeySignature(time, root=None, mode=None, fifths=None, root_str=None)[source]
A container for key signatures.
- fifths
Number of sharps or flats. Positive numbers for sharps and negative numbers for flats.
- Type
int, optional
Note
A key signature can be specified either by its root (root) or the number of sharps or flats (fifths) along with its mode.
- class muspy.Metadata(schema_version='0.1', title=None, creators=None, copyright=None, collection=None, source_filename=None, source_format=None)[source]
A container for metadata.
- schema_version
Schema version.
- Type
str, default: muspy.DEFAULT_SCHEMA_VERSION
- creators
Creator(s) of the song.
- Type
list of str, optional
- class muspy.Note(time, pitch, duration, velocity=None, pitch_str=None)[source]
A container for notes.
- velocity
Note velocity. Valid values are 0 to 127.
- Type
int, default: muspy.DEFAULT_VELOCITY (64)
- property start
Start time of the note.
- property end
End time of the note.
- adjust_time(func, attr=None, recursive=True)[source]
Adjust the timing of the note.
- Parameters
- Returns
- Return type
Object itself.
- class muspy.Track(program=0, is_drum=False, name=None, notes=None, chords=None, lyrics=None, annotations=None)[source]
A container for a music track.
- program
Program number, according to the General MIDI specification [1]. Valid values are 0 to 127.
- Type
int, default: 0 (Acoustic Grand Piano)
- notes
Musical notes.
- Type
list of
muspy.Note
, default: []
- chords
Chords.
- Type
list of
muspy.Chord
, default: []
- annotations
Annotations.
- Type
list of
muspy.Annotation
, default: []
- lyrics
Lyrics.
- Type
list of
muspy.Lyric
, default: []
Note
Indexing a Track object returns the note at a certain index. That is, track[idx] returns track.notes[idx]. The length of a Track object is the number of notes. That is, len(track) returns len(track.notes).
References
[1] https://www.midi.org/specifications/item/gm-level-1-sound-set
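The delegation described in the note above can be sketched in a few lines (DemoTrack is a hypothetical stand-in, not muspy.Track itself):

```python
class DemoTrack:
    """Sketch of the indexing behaviour described above; not MusPy's code."""

    def __init__(self, notes):
        self.notes = list(notes)

    def __getitem__(self, idx):
        return self.notes[idx]   # track[idx] -> track.notes[idx]

    def __len__(self):
        return len(self.notes)   # len(track) -> len(track.notes)

track = DemoTrack(["C4", "E4", "G4"])
```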
- get_end_time(is_sorted=False)[source]
Return the time of the last event.
This includes notes, chords, lyrics and annotations.
- Parameters
is_sorted (bool, default: False) – Whether all the list attributes are sorted.
- muspy.adjust_resolution(music, target=None, factor=None, rounding='round')[source]
Adjust resolution and timing of all time-stamped objects.
- Parameters
music (muspy.Music) – Object to adjust the resolution.
target (int, optional) – Target resolution.
factor (int or float, optional) – Factor used to adjust the resolution based on the formula: new_resolution = old_resolution * factor. For example, a factor of 2 doubles the resolution, and a factor of 0.5 halves the resolution.
rounding ({'round', 'ceil', 'floor'} or callable, default: 'round') – Rounding mode.
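The time-scaling half of this operation can be sketched as follows (scale_times is a hypothetical helper, not MusPy's implementation):

```python
import math

def scale_times(times, factor, rounding="round"):
    """Scale tick timestamps by a resolution factor; a simplified sketch."""
    round_func = {"round": round, "ceil": math.ceil, "floor": math.floor}[rounding]
    return [int(round_func(t * factor)) for t in times]

# A factor of 2 doubles every timestamp; a fractional factor needs rounding.
doubled = scale_times([0, 12, 24], 2)
halved = scale_times([0, 11, 24], 0.5, rounding="floor")
```

Note that downscaling is potentially lossy: with factor 0.5 and 'floor', the times 10 and 11 both map to 5.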
- muspy.adjust_time(obj, func)[source]
Adjust the timing of time-stamped objects.
- Parameters
obj (muspy.Music or muspy.Track) – Object to adjust the timing.
func (callable) – The function used to compute the new timing from the old timing, i.e., new_time = func(old_time).
See also
muspy.adjust_resolution()
Adjust the resolution and the timing of time-stamped objects.
Note
The resolution is left unchanged.
- muspy.append(obj1, obj2)[source]
Append an object to the corresponding list.
This will automatically determine the list attributes to append based on the type of the object.
- Parameters
obj1 (muspy.ComplexBase) – Object to which obj2 is appended.
obj2 – Object to be appended to obj1.
Notes
If obj1 is of type muspy.Music, obj2 can be muspy.Tempo, muspy.KeySignature, muspy.TimeSignature, muspy.Lyric, muspy.Annotation or muspy.Track.
If obj1 is of type muspy.Track, obj2 can be muspy.Note, muspy.Chord, muspy.Lyric or muspy.Annotation.
See also
muspy.ComplexBase.append
Equivalent function.
- muspy.clip(obj, lower=0, upper=127)[source]
Clip the velocity of each note.
- Parameters
obj (muspy.Music, muspy.Track or muspy.Note) – Object to clip.
- muspy.get_end_time(obj, is_sorted=False)[source]
Return the time of the last event in all tracks.
This includes tempos, key signatures, time signatures, note offsets, lyrics and annotations.
- Parameters
obj (muspy.Music or muspy.Track) – Object to inspect.
is_sorted (bool, default: False) – Whether all the list attributes are sorted.
- muspy.get_real_end_time(music, is_sorted=False)[source]
Return the end time in real time.
This includes tempos, key signatures, time signatures, note offsets, lyrics and annotations. Assume 120 qpm (quarter notes per minute) if no tempo information is available.
- Parameters
music (muspy.Music) – Object to inspect.
is_sorted (bool, default: False) – Whether all the list attributes are sorted.
- muspy.remove_duplicate(obj)[source]
Remove duplicate change events.
- Parameters
obj (muspy.Music) – Object to process.
- muspy.sort(obj)[source]
Sort all the time-stamped objects with respect to event time.
If a muspy.Music object is given, this will sort its key signatures, time signatures, lyrics and annotations, along with the notes, lyrics and annotations of each track.
If a muspy.Track object is given, this will sort its notes, lyrics and annotations.
- Parameters
obj (muspy.ComplexBase) – Object to sort.
- muspy.to_ordered_dict(obj, skip_missing=True, deepcopy=True)[source]
Return an OrderedDict converted from a Music object.
- Parameters
obj (muspy.Base) – Object to convert.
skip_missing (bool, default: True) – Whether to skip attributes with value None or those that are empty lists.
deepcopy (bool, default: True) – Whether to make deep copies of the attributes.
- Returns
Converted OrderedDict.
- Return type
OrderedDict
- muspy.transpose(obj, semitone)[source]
Transpose all the notes by a number of semitones.
- Parameters
obj (muspy.Music, muspy.Track or muspy.Note) – Object to transpose.
semitone (int) – Number of semitones to transpose the notes. A positive value raises the pitches, while a negative value lowers the pitches.
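At the pitch level, transposition is a simple shift, as in this sketch (transpose_pitches is a hypothetical helper; the clamping to the MIDI range 0 to 127 is an assumption for illustration, not necessarily MusPy's behaviour):

```python
def transpose_pitches(pitches, semitone):
    """Shift MIDI pitches by a signed number of semitones.

    Clamping to the valid MIDI range 0-127 is an assumption made here
    for illustration; this is a sketch, not MusPy's implementation.
    """
    return [min(127, max(0, p + semitone)) for p in pitches]

up = transpose_pitches([60, 64, 67], 2)      # C major triad up a whole tone
down = transpose_pitches([60, 64, 67], -12)  # down an octave
```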
- class muspy.ABCFolderDataset(root, convert=False, kind='json', n_jobs=1, ignore_exceptions=True, use_converted=None)[source]
Class for datasets storing ABC files in a folder.
See also
muspy.FolderDataset
Class for datasets storing files in a folder.
- class muspy.Dataset[source]
Base class for MusPy datasets.
To build a custom dataset, it should inherit this class and override the methods __getitem__ and __len__ as well as the class attribute _info. __getitem__ should return the i-th data sample as a muspy.Music object. __len__ should return the size of the dataset. _info should be a muspy.DatasetInfo instance storing the dataset information.
- save(root, kind='json', n_jobs=1, ignore_exceptions=True, verbose=True, **kwargs)[source]
Save all the music objects to a directory.
- Parameters
root (str or Path) – Root directory to save the data.
kind ({'json', 'yaml'}, default: 'json') – File format to save the data.
n_jobs (int, default: 1) – Maximum number of concurrently running jobs. If equal to 1, disable multiprocessing.
ignore_exceptions (bool, default: True) – Whether to ignore errors and skip failed conversions. This can be helpful if some source files are known to be corrupted.
verbose (bool, default: True) – Whether to be verbose.
**kwargs – Keyword arguments to pass to muspy.save().
- split(filename=None, splits=None, random_state=None)[source]
Split the dataset into training, test and (optionally) validation sets.
- Parameters
filename (str or Path, optional) – If given and exists, path to the file to read the split from. If None or not exists, path to save the split.
splits (float or list of float, optional) – Ratios for train-test-validation splits. If None, return the full dataset as a whole. If float, return train and test splits. If list of two floats, return train and test splits. If list of three floats, return train, test and validation splits.
random_state (int, array_like or RandomState, optional) – Random state used to create the splits. If int or array_like, the value is passed to numpy.random.RandomState, and the created RandomState object is used to create the splits. If a RandomState object is given, it will be used to create the splits.
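The ratio-based splitting logic can be sketched as follows (make_splits is a hypothetical helper using the stdlib random module rather than numpy.random.RandomState; not MusPy's implementation):

```python
import random

def make_splits(n, ratios, seed=None):
    """Shuffle dataset indices and partition them by the given ratios."""
    indices = list(range(n))
    random.Random(seed).shuffle(indices)  # seeded for reproducibility
    splits, start = {}, 0
    for name, ratio in zip(("train", "test", "validation"), ratios):
        stop = start + int(n * ratio)
        splits[name] = indices[start:stop]
        start = stop
    return splits

splits = make_splits(100, [0.8, 0.1, 0.1], seed=42)
```

Persisting the shuffled indices to a file, as the filename parameter suggests, makes the same split reusable across runs.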
- to_pytorch_dataset(factory=None, representation=None, split_filename=None, splits=None, random_state=None, **kwargs)[source]
Return the dataset as a PyTorch dataset.
- Parameters
factory (Callable, optional) – Function to be applied to the Music objects. The input is a Music object, and the output is an array or a tensor.
representation (str, optional) – Target representation. See muspy.to_representation() for available representations.
split_filename (str or Path, optional) – If given and exists, path to the file to read the split from. If None or not exists, path to save the split.
splits (float or list of float, optional) – Ratios for train-test-validation splits. If None, return the full dataset as a whole. If float, return train and test splits. If list of two floats, return train and test splits. If list of three floats, return train, test and validation splits.
random_state (int, array_like or RandomState, optional) – Random state used to create the splits. If int or array_like, the value is passed to numpy.random.RandomState, and the created RandomState object is used to create the splits. If a RandomState object is given, it will be used to create the splits.
- Returns
Converted PyTorch dataset(s).
- Return type
torch.utils.data.Dataset or dict of torch.utils.data.Dataset
- to_tensorflow_dataset(factory=None, representation=None, split_filename=None, splits=None, random_state=None, **kwargs)[source]
Return the dataset as a TensorFlow dataset.
- Parameters
factory (Callable, optional) – Function to be applied to the Music objects. The input is a Music object, and the output is an array or a tensor.
representation (str, optional) – Target representation. See muspy.to_representation() for available representations.
split_filename (str or Path, optional) – If given and exists, path to the file to read the split from. If None or not exists, path to save the split.
splits (float or list of float, optional) – Ratios for train-test-validation splits. If None, return the full dataset as a whole. If float, return train and test splits. If list of two floats, return train and test splits. If list of three floats, return train, test and validation splits.
random_state (int, array_like or RandomState, optional) – Random state used to create the splits. If int or array_like, the value is passed to numpy.random.RandomState, and the created RandomState object is used to create the splits. If a RandomState object is given, it will be used to create the splits.
- Returns
Converted TensorFlow dataset(s).
- Return type
tensorflow.data.Dataset or dict of tensorflow.data.Dataset
- class muspy.DatasetInfo(name=None, description=None, homepage=None, license=None)[source]
A container for dataset information.
- class muspy.EMOPIADataset(root, download_and_extract=False, overwrite=False, cleanup=False, convert=False, kind='json', n_jobs=1, ignore_exceptions=True, use_converted=None, verbose=True)[source]
EMOPIA Dataset.
- class muspy.EssenFolkSongDatabase(root, download_and_extract=False, overwrite=False, cleanup=False, convert=False, kind='json', n_jobs=1, ignore_exceptions=True, use_converted=None, verbose=True)[source]
Essen Folk Song Database.
- class muspy.FolderDataset(root, convert=False, kind='json', n_jobs=1, ignore_exceptions=True, use_converted=None)[source]
Class for datasets storing files in a folder.
This class extends muspy.Dataset to support folder datasets. To build a custom folder dataset, please refer to the documentation of muspy.Dataset for details. In addition, set the class attribute _extension to the extension to look for when building the dataset, and set read to a callable that takes the filename of a source file as input and returns the converted Music object.
- Parameters
convert (bool, default: False) – Whether to convert the dataset to MusPy JSON/YAML files. If False, will check if converted data exists. If so, disable on-the-fly mode. If not, enable on-the-fly mode and issue a warning.
kind ({'json', 'yaml'}, default: 'json') – File format to save the data.
n_jobs (int, default: 1) – Maximum number of concurrently running jobs. If equal to 1, disable multiprocessing.
ignore_exceptions (bool, default: True) – Whether to ignore errors and skip failed conversions. This can be helpful if some source files are known to be corrupted.
use_converted (bool, optional) – Force to disable on-the-fly mode and use converted data. Defaults to True if converted data exist, otherwise False.
Important
muspy.FolderDataset.converted_exists() depends solely on a special file named .muspy.success in the folder {root}/_converted/, which serves as an indicator for the existence and integrity of the converted dataset. If the converted dataset is built by muspy.FolderDataset.convert(), the .muspy.success file will be created as well. If the converted dataset is created manually, make sure to create the .muspy.success file in the folder {root}/_converted/ to prevent errors.
Notes
Two modes are available for this dataset. When the on-the-fly mode is enabled, a data sample is converted to a music object on the fly when being indexed. When the on-the-fly mode is disabled, a data sample is loaded from the precomputed converted data.
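The indicator-file convention described above can be sketched with pathlib (converted_exists and mark_converted are hypothetical helpers illustrating the convention, not MusPy's implementation):

```python
from pathlib import Path
import tempfile

def converted_exists(root):
    """Check for the .muspy.success indicator file; a sketch of the convention."""
    return (Path(root) / "_converted" / ".muspy.success").is_file()

def mark_converted(root):
    """Create the indicator file after building converted data manually."""
    converted = Path(root) / "_converted"
    converted.mkdir(parents=True, exist_ok=True)
    (converted / ".muspy.success").touch()

with tempfile.TemporaryDirectory() as root:
    before = converted_exists(root)   # no indicator yet
    mark_converted(root)
    after = converted_exists(root)    # indicator now present
```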
See also
muspy.Dataset
Base class for MusPy datasets.
- property converted_dir
Path to the root directory of the converted dataset.
- use_converted()[source]
Disable on-the-fly mode and use converted data.
- Returns
- Return type
Object itself.
- on_the_fly()[source]
Enable on-the-fly mode and convert the data on the fly.
- Returns
- Return type
Object itself.
- convert(kind='json', n_jobs=1, ignore_exceptions=True, verbose=True, **kwargs)[source]
Convert and save the Music objects.
The converted files will be named by their indices and saved to root/_converted. The original filenames can be found in the filenames attribute. For example, the file at filenames[i] will be converted and saved to {i}.json.
- Parameters
kind ({'json', 'yaml'}, default: 'json') – File format to save the data.
n_jobs (int, default: 1) – Maximum number of concurrently running jobs. If equal to 1, disable multiprocessing.
ignore_exceptions (bool, default: True) – Whether to ignore errors and skip failed conversions. This can be helpful if some source files are known to be corrupted.
verbose (bool, default: True) – Whether to be verbose.
**kwargs – Keyword arguments to pass to muspy.save().
- Returns
- Return type
Object itself.
- class muspy.HaydnOp20Dataset(root, download_and_extract=False, overwrite=False, cleanup=False, convert=False, kind='json', n_jobs=1, ignore_exceptions=True, use_converted=None, verbose=True)[source]
Haydn Op.20 Dataset.
- class muspy.HymnalDataset(root, download=False, convert=False, kind='json', n_jobs=1, ignore_exceptions=True, use_converted=None)[source]
Hymnal Dataset.
- class muspy.HymnalTuneDataset(root, download=False, convert=False, kind='json', n_jobs=1, ignore_exceptions=True, use_converted=None)[source]
Hymnal Dataset (tune only).
- class muspy.JSBChoralesDataset(root, download_and_extract=False, overwrite=False, cleanup=False, convert=False, kind='json', n_jobs=1, ignore_exceptions=True, use_converted=None, verbose=True)[source]
Johann Sebastian Bach Chorales Dataset.
- class muspy.LakhMIDIAlignedDataset(root, download_and_extract=False, overwrite=False, cleanup=False, convert=False, kind='json', n_jobs=1, ignore_exceptions=True, use_converted=None, verbose=True)[source]
Lakh MIDI Dataset - aligned subset.
- class muspy.LakhMIDIDataset(root, download_and_extract=False, overwrite=False, cleanup=False, convert=False, kind='json', n_jobs=1, ignore_exceptions=True, use_converted=None, verbose=True)[source]
Lakh MIDI Dataset.
- class muspy.LakhMIDIMatchedDataset(root, download_and_extract=False, overwrite=False, cleanup=False, convert=False, kind='json', n_jobs=1, ignore_exceptions=True, use_converted=None, verbose=True)[source]
Lakh MIDI Dataset - matched subset.
- class muspy.MAESTRODatasetV1(root, download_and_extract=False, overwrite=False, cleanup=False, convert=False, kind='json', n_jobs=1, ignore_exceptions=True, use_converted=None, verbose=True)[source]
MAESTRO Dataset V1 (MIDI only).
- class muspy.MAESTRODatasetV2(root, download_and_extract=False, overwrite=False, cleanup=False, convert=False, kind='json', n_jobs=1, ignore_exceptions=True, use_converted=None, verbose=True)[source]
MAESTRO Dataset V2 (MIDI only).
- class muspy.MAESTRODatasetV3(root, download_and_extract=False, overwrite=False, cleanup=False, convert=False, kind='json', n_jobs=1, ignore_exceptions=True, use_converted=None, verbose=True)[source]
MAESTRO Dataset V3 (MIDI only).
- class muspy.Music21Dataset(composer=None)[source]
A class for datasets containing files in the music21 corpus.
- Parameters
composer (str) – Name of a composer or a collection. Please refer to the music21 corpus reference page for a full list [1].
extensions (list of str) – File extensions of desired files.
References
[1] https://web.mit.edu/music21/doc/about/referenceCorpus.html
- convert(root, kind='json', n_jobs=1, ignore_exceptions=True)[source]
Convert and save the Music objects.
- Parameters
root (str or Path) – Root directory to save the data.
kind ({'json', 'yaml'}, default: 'json') – File format to save the data.
n_jobs (int, default: 1) – Maximum number of concurrently running jobs. If equal to 1, disable multiprocessing.
ignore_exceptions (bool, default: True) – Whether to ignore errors and skip failed conversions. This can be helpful if some source files are known to be corrupted.
- class muspy.MusicDataset(root, kind=None)[source]
Class for datasets of MusPy JSON/YAML files.
- Parameters
root (str or Path) – Root directory of the dataset.
kind ({'json', 'yaml'}, optional) – File formats to include in the dataset. Defaults to include both JSON and YAML files.
- root
Root directory of the dataset.
- Type
Path
- filenames
Paths to the files, relative to root.
- Type
list of Path
See also
muspy.Dataset
Base class for MusPy datasets.
- class muspy.MusicNetDataset(root, download_and_extract=False, overwrite=False, cleanup=False, convert=False, kind='json', n_jobs=1, ignore_exceptions=True, use_converted=None, verbose=True)[source]
MusicNet Dataset (MIDI only).
- class muspy.NESMusicDatabase(root, download_and_extract=False, overwrite=False, cleanup=False, convert=False, kind='json', n_jobs=1, ignore_exceptions=True, use_converted=None, verbose=True)[source]
NES Music Database.
- class muspy.NottinghamDatabase(root, download_and_extract=False, overwrite=False, cleanup=False, convert=False, kind='json', n_jobs=1, ignore_exceptions=True, use_converted=None, verbose=True)[source]
Nottingham Database.
- class muspy.RemoteABCFolderDataset(root, download_and_extract=False, overwrite=False, cleanup=False, convert=False, kind='json', n_jobs=1, ignore_exceptions=True, use_converted=None, verbose=True)[source]
Base class for remote datasets storing ABC files in a folder.
See also
muspy.ABCFolderDataset
Class for datasets storing ABC files in a folder.
muspy.RemoteDataset
Base class for remote MusPy datasets.
- class muspy.RemoteDataset(root, download_and_extract=False, overwrite=False, cleanup=False, verbose=True)[source]
Base class for remote MusPy datasets.
This class extends muspy.Dataset to support remote datasets. To build a custom remote dataset, please refer to the documentation of muspy.Dataset for details. In addition, set the class attribute _sources to the URLs of the source files (see Notes).
- Parameters
- Raises
RuntimeError – If download_and_extract is False but the file {root}/.muspy.success does not exist (see below).
Important
muspy.Dataset.exists() depends solely on a special file named .muspy.success in the directory {root}/. This file serves as an indicator for the existence and integrity of the dataset. It will automatically be created if the dataset is successfully downloaded and extracted by muspy.Dataset.download_and_extract(). If the dataset is downloaded manually, make sure to create the .muspy.success file in the directory {root}/ to prevent errors.
Notes
The class attribute _sources is a dictionary storing the following information of each source file.
filename (str): Name to save the file.
url (str): URL to the file.
archive (bool): Whether the file is an archive.
md5 (str, optional): Expected MD5 checksum of the file.
sha256 (str, optional): Expected SHA256 checksum of the file.
Here is an example:

_sources = {
    "example": {
        "filename": "example.tar.gz",
        "url": "https://www.example.com/example.tar.gz",
        "archive": True,
        "md5": None,
        "sha256": None,
    }
}
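The md5 field suggests checksum verification of downloaded files, which can be sketched with the stdlib as follows (verify_md5 is a hypothetical helper, not MusPy's implementation):

```python
import hashlib
import os
import tempfile

def verify_md5(path, expected_md5):
    """Compare a file's MD5 checksum against an expected value, as the md5
    field in _sources suggests; a sketch, not MusPy's implementation."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        # Read in chunks so large archives do not need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_md5

# Demonstrate on a throwaway file with a well-known checksum.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"hello")
ok = verify_md5(path, "5d41402abc4b2a76b9719d911017c592")  # md5 of b"hello"
os.remove(path)
```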
See also
muspy.Dataset
Base class for MusPy datasets.
- class muspy.RemoteFolderDataset(root, download_and_extract=False, overwrite=False, cleanup=False, convert=False, kind='json', n_jobs=1, ignore_exceptions=True, use_converted=None, verbose=True)[source]
Base class for remote datasets storing files in a folder.
- Parameters
download_and_extract (bool, default: False) – Whether to download and extract the dataset.
cleanup (bool, default: False) – Whether to remove the source archive(s).
convert (bool, default: False) – Whether to convert the dataset to MusPy JSON/YAML files. If False, will check if converted data exists. If so, disable on-the-fly mode. If not, enable on-the-fly mode and issue a warning.
kind ({'json', 'yaml'}, default: 'json') – File format to save the data.
n_jobs (int, default: 1) – Maximum number of concurrently running jobs. If equal to 1, disable multiprocessing.
ignore_exceptions (bool, default: True) – Whether to ignore errors and skip failed conversions. This can be helpful if some source files are known to be corrupted.
use_converted (bool, optional) – Force to disable on-the-fly mode and use converted data. Defaults to True if converted data exist, otherwise False.
See also
muspy.FolderDataset
Class for datasets storing files in a folder.
muspy.RemoteDataset
Base class for remote MusPy datasets.
- class muspy.RemoteMusicDataset(root, download_and_extract=False, overwrite=False, cleanup=False, kind=None, verbose=True)[source]
Base class for remote datasets of MusPy JSON/YAML files.
- Parameters
root (str or Path) – Root directory of the dataset.
download_and_extract (bool, default: False) – Whether to download and extract the dataset.
overwrite (bool, default: False) – Whether to overwrite existing file(s).
cleanup (bool, default: False) – Whether to remove the source archive(s).
kind ({'json', 'yaml'}, optional) – File formats to include in the dataset. Defaults to include both JSON and YAML files.
verbose (bool, default: True) – Whether to be verbose.
- root
Root directory of the dataset.
- Type
Path
- filenames
Paths to the files, relative to root.
- Type
list of Path
See also
muspy.MusicDataset
Class for datasets of MusPy JSON/YAML files.
muspy.RemoteDataset
Base class for remote MusPy datasets.
- class muspy.WikifoniaDataset(root, download_and_extract=False, overwrite=False, cleanup=False, convert=False, kind='json', n_jobs=1, ignore_exceptions=True, use_converted=None, verbose=True)[source]
Wikifonia dataset.
- muspy.get_dataset(key)[source]
Return a certain dataset class by key.
- Parameters
key (str) – Dataset key (case-insensitive).
- Returns
- Return type
The corresponding dataset class.
- muspy.list_datasets()[source]
Return all supported dataset classes as a list.
- Returns
- Return type
A list of all supported dataset classes.
- muspy.download_bravura_font(overwrite=False)[source]
Download the Bravura font.
- Parameters
overwrite (bool, default: False) – Whether to overwrite an existing file.
- muspy.download_musescore_soundfont(overwrite=False)[source]
Download the MuseScore General soundfont.
- Parameters
overwrite (bool, default: False) – Whether to overwrite an existing file.
- muspy.get_musescore_soundfont_dir()[source]
Return path to the MuseScore General soundfont directory.
- muspy.from_event_representation(array, resolution=24, program=0, is_drum=False, use_single_note_off_event=False, use_end_of_sequence_event=False, max_time_shift=100, velocity_bins=32, default_velocity=64, duplicate_note_mode='fifo')[source]
Decode event-based representation into a Music object.
- Parameters
array (ndarray) – Array in event-based representation to decode.
resolution (int, default: muspy.DEFAULT_RESOLUTION (24)) – Time steps per quarter note.
program (int, default: 0 (Acoustic Grand Piano)) – Program number, according to General MIDI specification [1]. Valid values are 0 to 127.
is_drum (bool, default: False) – Whether it is a percussion track.
use_single_note_off_event (bool, default: False) – Whether to use a single note-off event for all the pitches. If True, a note-off event will close all active notes, which can lead to lossy conversion for polyphonic music.
use_end_of_sequence_event (bool, default: False) – Whether to append an end-of-sequence event to the encoded sequence.
max_time_shift (int, default: 100) – Maximum time shift (in ticks) to be encoded as a separate event. Time shifts larger than max_time_shift will be decomposed into two or more time-shift events.
velocity_bins (int, default: 32) – Number of velocity bins to use.
default_velocity (int, default: muspy.DEFAULT_VELOCITY (64)) – Default velocity value to use when decoding.
duplicate_note_mode ({'fifo', 'lifo', 'all'}, default: 'fifo') –
Policy for dealing with duplicate notes. When a note-off event is presented while multiple corresponding note-on events have not yet been closed, a policy is needed to decide which note-on events to close. This is only effective when use_single_note_off_event is False.
’fifo’ (first in first out): close the earliest note on
’lifo’ (last in first out): close the latest note on
’all’: close all note on messages
- Returns
Decoded Music object.
- Return type
References
[1] https://www.midi.org/specifications/item/gm-level-1-sound-set
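As an illustration of the duplicate_note_mode policies above, here is a minimal plain-Python sketch (the close_note helper is hypothetical and not part of MusPy):

```python
def close_note(active_onsets, mode="fifo"):
    """Given onset times of still-open note-ons for one pitch, return the
    onset(s) to close when a note-off arrives, plus the remaining list."""
    if not active_onsets:
        return [], []
    if mode == "fifo":   # close the earliest note on
        return [active_onsets[0]], active_onsets[1:]
    if mode == "lifo":   # close the latest note on
        return [active_onsets[-1]], active_onsets[:-1]
    if mode == "all":    # close all note on messages
        return list(active_onsets), []
    raise ValueError(f"unknown mode: {mode}")

# Two overlapping note-ons at ticks 0 and 4, then one note-off:
closed, remaining = close_note([0, 4], mode="fifo")
# closed == [0], remaining == [4]
```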
- muspy.from_mido(midi, duplicate_note_mode='fifo')[source]
Return a mido MidiFile object as a Music object.
- Parameters
midi (
mido.MidiFile
) – Mido MidiFile object to convert.duplicate_note_mode ({'fifo', 'lifo', 'all'}, default: 'fifo') –
Policy for dealing with duplicate notes. When a note-off message is presented while multiple corresponding note-on messages have not yet been closed, a policy is needed to decide which note-on messages to close.
’fifo’ (first in first out): close the earliest note on
’lifo’ (last in first out): close the latest note on
’all’: close all note on messages
- Returns
Converted Music object.
- Return type
- muspy.from_music21(stream, resolution=24)[source]
Return a music21 Stream object as Music or Track object(s).
- Parameters
stream (music21.stream.Stream) – Stream object to convert.
resolution (int, default: muspy.DEFAULT_RESOLUTION (24)) – Time steps per quarter note.
- Returns
Converted Music or Track object(s).
- Return type
- muspy.from_music21_opus(opus, resolution=24)[source]
Return a music21 Opus object as a list of Music objects.
- Parameters
opus (music21.stream.Opus) – Opus object to convert.
resolution (int, default: muspy.DEFAULT_RESOLUTION (24)) – Time steps per quarter note.
- Returns
List of converted Music objects.
- Return type
- muspy.from_music21_part(part, resolution=24)[source]
Return a music21 Part object as Track object(s).
- Parameters
part (music21.stream.Part) – Part object to parse.
resolution (int, default: muspy.DEFAULT_RESOLUTION (24)) – Time steps per quarter note.
- Returns
Parsed track(s).
- Return type
muspy.Track
or list ofmuspy.Track
- muspy.from_music21_score(score, resolution=24)[source]
Return a music21 Score object as a Music object.
- Parameters
score (music21.stream.Score) – Score object to convert.
resolution (int, default: muspy.DEFAULT_RESOLUTION (24)) – Time steps per quarter note.
- Returns
Converted Music object.
- Return type
- muspy.from_note_representation(array, resolution=24, program=0, is_drum=False, use_start_end=False, encode_velocity=True, default_velocity=64)[source]
Decode note-based representation into a Music object.
- Parameters
array (ndarray) – Array in note-based representation to decode.
resolution (int, default: muspy.DEFAULT_RESOLUTION (24)) – Time steps per quarter note.
program (int, default: 0 (Acoustic Grand Piano)) – Program number, according to General MIDI specification [1]. Valid values are 0 to 127.
is_drum (bool, default: False) – Whether it is a percussion track.
use_start_end (bool, default: False) – Whether to use ‘start’ and ‘end’ to encode the timing rather than ‘time’ and ‘duration’.
encode_velocity (bool, default: True) – Whether to encode note velocities.
default_velocity (int, default: muspy.DEFAULT_VELOCITY (64)) – Default velocity value to use when decoding. Only used when encode_velocity is True.
- Returns
Decoded Music object.
- Return type
References
[1] https://www.midi.org/specifications/item/gm-level-1-sound-set
- muspy.from_object(obj, **kwargs)[source]
Return an outside object as a Music object.
- Parameters
obj – Object to convert. Supported objects are music21.Stream,
mido.MidiFile
,pretty_midi.PrettyMIDI
, andpypianoroll.Multitrack
objects.**kwargs – Keyword arguments to pass to
muspy.from_music21()
,muspy.from_mido()
,from_pretty_midi()
orfrom_pypianoroll()
.
- Returns
Converted Music object.
- Return type
- muspy.from_pianoroll_representation(array, resolution=24, program=0, is_drum=False, encode_velocity=True, default_velocity=64)[source]
Decode piano-roll representation into a Music object.
- Parameters
array (ndarray) – Array in piano-roll representation to decode.
resolution (int, default: muspy.DEFAULT_RESOLUTION (24)) – Time steps per quarter note.
program (int, default: 0 (Acoustic Grand Piano)) – Program number, according to General MIDI specification [1]. Valid values are 0 to 127.
is_drum (bool, default: False) – Whether it is a percussion track.
encode_velocity (bool, default: True) – Whether to encode velocities.
default_velocity (int, default: muspy.DEFAULT_VELOCITY (64)) – Default velocity value to use when decoding. Only used when encode_velocity is True.
- Returns
Decoded Music object.
- Return type
References
[1] https://www.midi.org/specifications/item/gm-level-1-sound-set
- muspy.from_pitch_representation(array, resolution=24, program=0, is_drum=False, use_hold_state=False, default_velocity=64)[source]
Decode pitch-based representation into a Music object.
- Parameters
array (ndarray) – Array in pitch-based representation to decode.
resolution (int, default: muspy.DEFAULT_RESOLUTION (24)) – Time steps per quarter note.
program (int, default: 0 (Acoustic Grand Piano)) – Program number, according to General MIDI specification [1]. Valid values are 0 to 127.
is_drum (bool, default: False) – Whether it is a percussion track.
use_hold_state (bool, default: False) – Whether to use a special state for holds.
default_velocity (int, default: muspy.DEFAULT_VELOCITY (64)) – Default velocity value to use when decoding.
- Returns
Decoded Music object.
- Return type
References
[1] https://www.midi.org/specifications/item/gm-level-1-sound-set
- muspy.from_pretty_midi(midi, resolution=None)[source]
Return a pretty_midi PrettyMIDI object as a Music object.
- Parameters
midi (
pretty_midi.PrettyMIDI
) – PrettyMIDI object to convert.resolution (int, default: muspy.DEFAULT_RESOLUTION (24)) – Time steps per quarter note.
- Returns
Converted Music object.
- Return type
- muspy.from_pypianoroll(multitrack, default_velocity=64)[source]
Return a Pypianoroll Multitrack object as a Music object.
- Parameters
multitrack (
pypianoroll.Multitrack
) – Pypianoroll Multitrack object to convert.default_velocity (int, default: muspy.DEFAULT_VELOCITY (64)) – Default velocity value to use when decoding.
- Returns
music – Converted MusPy Music object.
- Return type
- muspy.from_pypianoroll_track(track, default_velocity=64)[source]
Return a Pypianoroll Track object as a Track object.
- Parameters
track (
pypianoroll.Track
) – Pypianoroll Track object to convert.default_velocity (int, default: muspy.DEFAULT_VELOCITY (64)) – Default velocity value to use when decoding.
- Returns
Converted track.
- Return type
- muspy.from_representation(array, kind, **kwargs)[source]
Decode an array in the given representation into a Music object.
- Parameters
array (
numpy.ndarray
) – Array in a supported representation.kind (str, {'pitch', 'pianoroll', 'event', 'note'}) – Data representation.
**kwargs – Keyword arguments to pass to
muspy.from_pitch_representation()
,muspy.from_pianoroll_representation()
,from_event_representation()
orfrom_note_representation()
.
- Returns
Converted Music object.
- Return type
- muspy.load(path, kind=None, **kwargs)[source]
Load a JSON or a YAML file into a Music object.
This is a wrapper function for
muspy.load_json()
andmuspy.load_yaml()
.- Parameters
path (str, Path or TextIO) – Path to the file or the file to to load.
kind ({'json', 'yaml'}, optional) – Format of the file to load. Defaults to infer from the extension.
**kwargs – Keyword arguments to pass to
muspy.load_json()
ormuspy.load_yaml()
.
- Returns
Loaded Music object.
- Return type
See also
muspy.load_json()
Load a JSON file into a Music object.
muspy.load_yaml()
Load a YAML file into a Music object.
muspy.read()
Read a MIDI/MusicXML/ABC file into a Music object.
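The extension inference performed by muspy.load can be sketched as follows (infer_load_kind is a hypothetical helper; the real logic may differ in details):

```python
from pathlib import Path

def infer_load_kind(path):
    """Infer 'json' or 'yaml' from the file extension — a sketch of the
    inference muspy.load performs when kind is not given."""
    suffixes = [s.lower() for s in Path(path).suffixes]
    if suffixes and suffixes[-1] == ".gz":   # compressed file, look one level deeper
        suffixes = suffixes[:-1]
    if not suffixes:
        raise ValueError("cannot infer format from extension")
    ext = suffixes[-1]
    if ext == ".json":
        return "json"
    if ext in (".yaml", ".yml"):
        return "yaml"
    raise ValueError(f"unsupported extension: {ext}")

infer_load_kind("song.json")      # 'json'
infer_load_kind("song.yaml.gz")   # 'yaml'
```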
- muspy.load_json(path, compressed=None)[source]
Load a JSON file into a Music object.
- Parameters
- Returns
Loaded Music object.
- Return type
Notes
When a path is given, assume UTF-8 encoding and gzip compression if compressed=True.
- muspy.load_yaml(path, compressed=None)[source]
Load a YAML file into a Music object.
- Parameters
- Returns
Loaded Music object.
- Return type
Notes
When a path is given, assume UTF-8 encoding and gzip compression if compressed=True.
- muspy.read(path, kind=None, **kwargs)[source]
Read a MIDI/MusicXML/ABC file into a Music object.
- Parameters
path (str or Path) – Path to the file to read.
kind ({'midi', 'musicxml', 'abc'}, optional) – Format of the file to read. Defaults to infer from the extension.
**kwargs – Keyword arguments to pass to
muspy.read_midi()
,muspy.read_musicxml()
orread_abc()
.
- Returns
Converted Music object(s).
- Return type
muspy.Music
or list ofmuspy.Music
See also
muspy.load()
Load a JSON or a YAML file into a Music object.
- muspy.read_abc(path, number=None, resolution=24)[source]
Read an ABC file into Music object(s) using the music21 backend.
- Parameters
- Returns
Converted Music object(s).
- Return type
list of
muspy.Music
- muspy.read_abc_string(data_str, number=None, resolution=24)[source]
Read ABC data into Music object(s) using the music21 backend.
- Parameters
- Returns
Converted Music object(s).
- Return type
- muspy.read_midi(path, backend='mido', duplicate_note_mode='fifo')[source]
Read a MIDI file into a Music object.
- Parameters
path (str or Path) – Path to the MIDI file to read.
backend ({'mido', 'pretty_midi'}, default: 'mido') – Backend to use.
duplicate_note_mode ({'fifo', 'lifo', 'all'}, default: 'fifo') –
Policy for dealing with duplicate notes. When a note-off message is presented while multiple corresponding note-on messages have not yet been closed, a policy is needed to decide which note-on messages to close. Only used when backend is ‘mido’.
’fifo’ (first in first out): close the earliest note on
’lifo’ (last in first out): close the latest note on
’all’: close all note on messages
- Returns
Converted Music object.
- Return type
- muspy.read_musescore(path, resolution=None, compressed=None)[source]
Read a MuseScore file into a Music object.
- Parameters
- Returns
Converted Music object.
- Return type
Note
This function is based on MuseScore 3. Files created by an earlier version of MuseScore might not be read correctly.
- muspy.read_musicxml(path, resolution=None, compressed=None)[source]
Read a MusicXML file into a Music object.
- Parameters
- Returns
Converted Music object.
- Return type
- muspy.drum_in_pattern_rate(music, meter)[source]
Return the ratio of drum notes in a certain drum pattern.
The drum-in-pattern rate is defined as the ratio of the number of drum notes that fall in a certain drum pattern to the total number of drum notes. Only drum tracks are considered. Return NaN if no drum note is found. This metric is used in [1].
\[drum\_in\_pattern\_rate = \frac{ \#(drum\_notes\_in\_pattern)}{\#(drum\_notes)}\]- Parameters
music (
muspy.Music
) – Music object to evaluate.meter (str, {'duple', 'triple'}) – Meter of the drum pattern.
- Returns
Drum-in-pattern rate.
- Return type
See also
muspy.drum_pattern_consistency()
Compute the largest drum-in-pattern rate.
References
Hao-Wen Dong, Wen-Yi Hsiao, Li-Chia Yang, and Yi-Hsuan Yang, “MuseGAN: Multi-track sequential generative adversarial networks for symbolic music generation and accompaniment,” in Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI), 2018.
- muspy.drum_pattern_consistency(music)[source]
Return the largest drum-in-pattern rate.
The drum pattern consistency is defined as the largest drum-in-pattern rate over duple and triple meters. Only drum tracks are considered. Return NaN if no drum note is found.
\[drum\_pattern\_consistency = \max_{meter}{ drum\_in\_pattern\_rate(meter)}\]- Parameters
music (
muspy.Music
) – Music object to evaluate.- Returns
Drum pattern consistency.
- Return type
See also
muspy.drum_in_pattern_rate()
Compute the ratio of drum notes in a certain drum pattern.
- muspy.empty_beat_rate(music)[source]
Return the ratio of empty beats.
The empty-beat rate is defined as the ratio of the number of empty beats (where no note is played) to the total number of beats. Return NaN if song length is zero. This metric is also implemented in Pypianoroll [1].
\[empty\_beat\_rate = \frac{\#(empty\_beats)}{\#(beats)}\]- Parameters
music (
muspy.Music
) – Music object to evaluate.- Returns
Empty-beat rate.
- Return type
See also
muspy.empty_measure_rate()
Compute the ratio of empty measures.
References
Hao-Wen Dong, Wen-Yi Hsiao, and Yi-Hsuan Yang, “Pypianoroll: Open Source Python Package for Handling Multitrack Pianorolls,” in Late-Breaking Demos of the 18th International Society for Music Information Retrieval Conference (ISMIR), 2018.
- muspy.empty_measure_rate(music, measure_resolution)[source]
Return the ratio of empty measures.
The empty-measure rate is defined as the ratio of the number of empty measures (where no note is played) to the total number of measures. Note that this metric only works for songs with a constant time signature. Return NaN if song length is zero. This metric is used in [1].
\[empty\_measure\_rate = \frac{\#(empty\_measures)}{\#(measures)}\]- Parameters
music (
muspy.Music
) – Music object to evaluate.measure_resolution (int) – Time steps per measure.
- Returns
Empty-measure rate.
- Return type
See also
muspy.empty_beat_rate()
Compute the ratio of empty beats.
References
Hao-Wen Dong, Wen-Yi Hsiao, Li-Chia Yang, and Yi-Hsuan Yang, “MuseGAN: Multi-track sequential generative adversarial networks for symbolic music generation and accompaniment,” in Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI), 2018.
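The empty-measure rate can be sketched on a boolean piano-roll array (an illustrative reimplementation, not MusPy's, which operates on Music objects):

```python
import numpy as np

def empty_measure_rate(pianoroll, measure_resolution):
    """Ratio of measures with no note activity, from a boolean piano roll
    of shape (time_steps, 128) — a sketch of the metric described above."""
    n_steps = len(pianoroll)
    if n_steps == 0:
        return float("nan")
    n_measures = int(np.ceil(n_steps / measure_resolution))
    empty = 0
    for i in range(n_measures):
        measure = pianoroll[i * measure_resolution:(i + 1) * measure_resolution]
        if not measure.any():
            empty += 1
    return empty / n_measures

roll = np.zeros((8, 128), dtype=bool)
roll[0, 60] = True            # one note in the first measure only
empty_measure_rate(roll, 4)   # 0.5 (the second measure is empty)
```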
- muspy.groove_consistency(music, measure_resolution)[source]
Return the groove consistency.
The groove consistency is defined as one minus the mean Hamming distance between the binary onset vectors of neighboring measures.
\[groove\_consistency = 1 - \frac{1}{T - 1} \sum_{i = 1}^{T - 1}{ d(G_i, G_{i + 1})}\]Here, \(T\) is the number of measures, \(G_i\) is the binary onset vector of the \(i\)-th measure (a one at each position that has an onset, otherwise a zero), and \(d(G, G')\) is the Hamming distance between two vectors \(G\) and \(G'\). Note that this metric only works for songs with a constant time signature. Return NaN if the number of measures is less than two. This metric is used in [1].
- Parameters
music (
muspy.Music
) – Music object to evaluate.measure_resolution (int) – Time steps per measure.
- Returns
Groove consistency.
- Return type
References
Shih-Lun Wu and Yi-Hsuan Yang, “The Jazz Transformer on the Front Line: Exploring the Shortcomings of AI-composed Music through Quantitative Measures”, in Proceedings of the 21st International Society for Music Information Retrieval Conference, 2020.
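The groove-consistency formula above can be sketched as follows (an illustrative reimplementation on a binary onset vector; the sketch assumes the Hamming distance is normalized by the measure resolution):

```python
import numpy as np

def groove_consistency(onsets, measure_resolution):
    """Groove consistency from a binary onset vector (1 at time steps with
    at least one note onset) — a sketch, not MusPy's implementation."""
    n_measures = len(onsets) // measure_resolution
    if n_measures < 2:
        return float("nan")
    measures = np.reshape(
        onsets[: n_measures * measure_resolution], (n_measures, measure_resolution)
    )
    # Hamming distance between each pair of neighboring measures,
    # normalized by the measure length, then averaged.
    hamming = np.count_nonzero(measures[:-1] != measures[1:], axis=1)
    return 1.0 - hamming.mean() / measure_resolution

# Two identical measures -> perfectly consistent groove:
groove_consistency(np.array([1, 0, 1, 0, 1, 0, 1, 0]), 4)  # 1.0
```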
- muspy.n_pitch_classes_used(music)[source]
Return the number of unique pitch classes used.
Drum tracks are ignored.
- Parameters
music (
muspy.Music
) – Music object to evaluate.- Returns
Number of unique pitch classes used.
- Return type
See also
muspy.n_pitches_used()
Compute the number of unique pitches used.
- muspy.n_pitches_used(music)[source]
Return the number of unique pitches used.
Drum tracks are ignored.
- Parameters
music (
muspy.Music
) – Music object to evaluate.- Returns
Number of unique pitches used.
- Return type
See also
muspy.n_pitch_classes_used()
Compute the number of unique pitch classes used.
- muspy.pitch_class_entropy(music)[source]
Return the entropy of the normalized note pitch class histogram.
The pitch class entropy is defined as the Shannon entropy of the normalized note pitch class histogram. Drum tracks are ignored. Return NaN if no note is found. This metric is used in [1].
\[pitch\_class\_entropy = -\sum_{i = 0}^{11}{ P(pitch\_class=i) \times \log_2 P(pitch\_class=i)}\]- Parameters
music (
muspy.Music
) – Music object to evaluate.- Returns
Pitch class entropy.
- Return type
See also
muspy.pitch_entropy()
Compute the entropy of the normalized pitch histogram.
References
Shih-Lun Wu and Yi-Hsuan Yang, “The Jazz Transformer on the Front Line: Exploring the Shortcomings of AI-composed Music through Quantitative Measures”, in Proceedings of the 21st International Society for Music Information Retrieval Conference, 2020.
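The pitch-class entropy can be sketched from a list of MIDI pitch numbers (an illustrative reimplementation, not MusPy's):

```python
import math
from collections import Counter

def pitch_class_entropy(pitches):
    """Shannon entropy of the normalized pitch-class histogram, computed
    from a list of MIDI pitch numbers — a sketch of the metric above."""
    if not pitches:
        return float("nan")
    counts = Counter(p % 12 for p in pitches)
    total = len(pitches)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

pitch_class_entropy([60, 60, 60, 60])   # 0.0 (a single pitch class)
pitch_class_entropy([60, 62, 64, 65])   # 2.0 (four equally likely classes)
```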
- muspy.pitch_entropy(music)[source]
Return the entropy of the normalized note pitch histogram.
The pitch entropy is defined as the Shannon entropy of the normalized note pitch histogram. Drum tracks are ignored. Return NaN if no note is found.
\[pitch\_entropy = -\sum_{i = 0}^{127}{ P(pitch=i) \log_2 P(pitch=i)}\]- Parameters
music (
muspy.Music
) – Music object to evaluate.- Returns
Pitch entropy.
- Return type
See also
muspy.pitch_class_entropy()
Compute the entropy of the normalized pitch class histogram.
- muspy.pitch_in_scale_rate(music, root, mode)[source]
Return the ratio of pitches in a certain musical scale.
The pitch-in-scale rate is defined as the ratio of the number of notes in a certain scale to the total number of notes. Drum tracks are ignored. Return NaN if no note is found. This metric is used in [1].
\[pitch\_in\_scale\_rate = \frac{\#(notes\_in\_scale)}{\#(notes)}\]- Parameters
music (
muspy.Music
) – Music object to evaluate.root (int) – Root of the scale.
mode (str, {'major', 'minor'}) – Mode of the scale.
- Returns
Pitch-in-scale rate.
- Return type
See also
muspy.scale_consistency()
Compute the largest pitch-in-scale rate.
References
Hao-Wen Dong, Wen-Yi Hsiao, Li-Chia Yang, and Yi-Hsuan Yang, “MuseGAN: Multi-track sequential generative adversarial networks for symbolic music generation and accompaniment,” in Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI), 2018.
- muspy.pitch_range(music)[source]
Return the pitch range.
Drum tracks are ignored. Return zero if no note is found.
- Parameters
music (
muspy.Music
) – Music object to evaluate.- Returns
Pitch range.
- Return type
- muspy.polyphony(music)[source]
Return the average number of pitches being played concurrently.
The polyphony is defined as the average number of pitches being played at the same time, evaluated only at time steps where at least one pitch is on. Drum tracks are ignored. Return NaN if no note is found.
\[polyphony = \frac{ \#(pitches\_when\_at\_least\_one\_pitch\_is\_on) }{ \#(time\_steps\_where\_at\_least\_one\_pitch\_is\_on) }\]- Parameters
music (
muspy.Music
) – Music object to evaluate.- Returns
Polyphony.
- Return type
See also
muspy.polyphony_rate()
Compute the ratio of time steps where multiple pitches are on.
- muspy.polyphony_rate(music, threshold=2)[source]
Return the ratio of time steps where multiple pitches are on.
The polyphony rate is defined as the ratio of the number of time steps where multiple pitches are on to the total number of time steps. Drum tracks are ignored. Return NaN if song length is zero. This metric is used in [1], where it is called polyphonicity.
\[polyphony\_rate = \frac{ \#(time\_steps\_where\_multiple\_pitches\_are\_on) }{ \#(time\_steps) }\]- Parameters
music (
muspy.Music
) – Music object to evaluate.threshold (int, default: 2) – Threshold of number of pitches to count into the numerator.
- Returns
Polyphony rate.
- Return type
See also
muspy.polyphony()
Compute the average number of pitches being played at the same time.
References
Hao-Wen Dong, Wen-Yi Hsiao, Li-Chia Yang, and Yi-Hsuan Yang, “MuseGAN: Multi-track sequential generative adversarial networks for symbolic music generation and accompaniment,” in Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI), 2018.
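The polyphony rate can be sketched on a boolean piano roll (illustrative only; MusPy's implementation operates on Music objects):

```python
import numpy as np

def polyphony_rate(pianoroll, threshold=2):
    """Ratio of time steps where at least `threshold` pitches are on,
    from a boolean piano roll of shape (time_steps, 128) — a sketch."""
    n_steps = len(pianoroll)
    if n_steps == 0:
        return float("nan")
    n_on = pianoroll.sum(axis=1)
    return np.count_nonzero(n_on >= threshold) / n_steps

roll = np.zeros((4, 128), dtype=bool)
roll[0, [60, 64]] = True   # a dyad on the first step
roll[1, 62] = True         # a single note on the second
polyphony_rate(roll)       # 0.25 (1 of 4 steps has multiple pitches on)
```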
- muspy.scale_consistency(music)[source]
Return the largest pitch-in-scale rate.
The scale consistency is defined as the largest pitch-in-scale rate over all major and minor scales. Drum tracks are ignored. Return NaN if no note is found. This metric is used in [1].
\[scale\_consistency = \max_{root, mode}{ pitch\_in\_scale\_rate(root, mode)}\]- Parameters
music (
muspy.Music
) – Music object to evaluate.- Returns
Scale consistency.
- Return type
See also
muspy.pitch_in_scale_rate()
Compute the ratio of pitches in a certain musical scale.
References
Olof Mogren, “C-RNN-GAN: Continuous recurrent neural networks with adversarial training,” in NeurIPS Workshop on Constructive Machine Learning, 2016.
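The scale consistency can be sketched as follows (illustrative; the sketch assumes the natural minor scale, which may differ from MusPy's choice of minor scale):

```python
def pitch_in_scale_rate(pitches, root, mode):
    """Ratio of pitches in a major or natural-minor scale — a sketch
    of the metric, operating on a flat list of MIDI pitch numbers."""
    intervals = {"major": {0, 2, 4, 5, 7, 9, 11},
                 "minor": {0, 2, 3, 5, 7, 8, 10}}[mode]
    if not pitches:
        return float("nan")
    in_scale = sum((p - root) % 12 in intervals for p in pitches)
    return in_scale / len(pitches)

def scale_consistency(pitches):
    """Largest pitch-in-scale rate over all 24 major and minor scales."""
    return max(
        pitch_in_scale_rate(pitches, root, mode)
        for root in range(12)
        for mode in ("major", "minor")
    )

scale_consistency([60, 62, 64, 65, 67])  # 1.0 (all notes fit C major)
```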
- class muspy.Music(metadata=None, resolution=None, tempos=None, key_signatures=None, time_signatures=None, beats=None, lyrics=None, annotations=None, tracks=None)[source]
A universal container for symbolic music.
This is the core class of MusPy. A Music object can be constructed in the following ways.
muspy.Music()
: Construct by setting values for attributesmuspy.Music.from_dict()
: Construct from a dictionary that stores the attributes and their values as key-value pairsmuspy.read()
: Read from a MIDI, a MusicXML or an ABC filemuspy.load()
: Load from a JSON or a YAML file saved bymuspy.save()
muspy.from_object()
: Convert from a music21.Stream,mido.MidiFile
,pretty_midi.PrettyMIDI
orpypianoroll.Multitrack
object
- metadata
Metadata.
- Type
muspy.Metadata
, default: Metadata()
- resolution
Time steps per quarter note.
- Type
int, default: muspy.DEFAULT_RESOLUTION (24)
- tempos
Tempo changes.
- Type
list of
muspy.Tempo
, default: []
- key_signatures
Key signature changes.
- Type
list of
muspy.KeySignature
, default: []
- time_signatures
Time signature changes.
- Type
list of
muspy.TimeSignature
, default: []
- beats
Beats.
- Type
list of
muspy.Beat
, default: []
- lyrics
Lyrics.
- Type
list of
muspy.Lyric
, default: []
- annotations
Annotations.
- Type
list of
muspy.Annotation
, default: []
- tracks
Music tracks.
- Type
list of
muspy.Track
, default: []
Note
Indexing a Music object returns the track of a certain index. That is,
music[idx]
returnsmusic.tracks[idx]
. Length of a Music object is the number of tracks. That is,len(music)
returnslen(music.tracks)
.- get_end_time(is_sorted=False)[source]
Return the time of the last event in all tracks.
This includes tempos, key signatures, time signatures, note offsets, lyrics and annotations.
- Parameters
is_sorted (bool, default: False) – Whether all the list attributes are sorted.
- get_real_end_time(is_sorted=False)[source]
Return the end time in realtime.
This includes tempos, key signatures, time signatures, note offsets, lyrics and annotations. Assume 120 qpm (quarter notes per minute) if no tempo information is available.
- Parameters
is_sorted (bool, default: False) – Whether all the list attributes are sorted.
- infer_beats()[source]
Infer beats from the time signature changes.
This assumes that there is a downbeat at each time signature change (this is not always true, e.g., for a pickup measure).
- Returns
List of beats inferred from the time signature changes. Return an empty list if no time signature is found.
- Return type
list of
muspy.Beat
- adjust_resolution(target=None, factor=None, rounding='round')[source]
Adjust resolution and timing of all time-stamped objects.
- Parameters
target (int, optional) – Target resolution.
factor (int or float, optional) – Factor used to adjust the resolution based on the formula: new_resolution = old_resolution * factor. For example, a factor of 2 doubles the resolution, and a factor of 0.5 halves the resolution.
rounding ({'round', 'ceil', 'floor'} or callable, default: 'round') – Rounding mode.
- Returns
- Return type
Object itself.
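The rescaling that adjust_resolution applies to time stamps can be sketched as follows (illustrative; the hypothetical adjust_times helper handles only a flat list of times):

```python
import math

def adjust_times(times, old_resolution, target, rounding="round"):
    """Rescale time stamps when changing resolution, following
    new_time = time * target / old_resolution — a sketch of what
    adjust_resolution does to every time-stamped object."""
    ops = {"round": round, "ceil": math.ceil, "floor": math.floor}
    op = ops[rounding] if isinstance(rounding, str) else rounding
    factor = target / old_resolution
    return [int(op(t * factor)) for t in times]

adjust_times([0, 12, 24], old_resolution=24, target=12)  # [0, 6, 12]
adjust_times([0, 1], old_resolution=24, target=12, rounding="ceil")  # [0, 1]
```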
- transpose(semitone)[source]
Transpose all the notes by a number of semitones.
- Parameters
semitone (int) – Number of semitones to transpose the notes. A positive value raises the pitches, while a negative value lowers the pitches.
- Returns
- Return type
Object itself.
Notes
Drum tracks are skipped.
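A sketch of the transposition on a flat list of MIDI pitches (the transpose function here is a hypothetical helper, not the method itself):

```python
def transpose(pitches, semitone, is_drum=False):
    """Shift pitches by `semitone` semitones, skipping drum tracks —
    a sketch of what Music.transpose does to every note."""
    if is_drum:
        return list(pitches)   # drum tracks are left untouched
    return [pitch + semitone for pitch in pitches]

transpose([60, 64, 67], 2)             # [62, 66, 69]
transpose([36, 38], 2, is_drum=True)   # unchanged
```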
- trim(end)[source]
Trim the track.
- Parameters
end (int) – End time, exclusive (i.e., the maximum time will be end - 1).
- Returns
- Return type
Object itself.
- save(path, kind=None, **kwargs)[source]
Save losslessly to a JSON or a YAML file.
Refer to
muspy.save()
for full documentation.
- save_json(path, **kwargs)[source]
Save losslessly to a JSON file.
Refer to
muspy.save_json()
for full documentation.
- save_yaml(path)[source]
Save losslessly to a YAML file.
Refer to
muspy.save_yaml()
for full documentation.
- write(path, kind=None, **kwargs)[source]
Write to a MIDI, a MusicXML, an ABC or an audio file.
Refer to
muspy.write()
for full documentation.
- write_midi(path, **kwargs)[source]
Write to a MIDI file.
Refer to
muspy.write_midi()
for full documentation.
- write_musicxml(path, **kwargs)[source]
Write to a MusicXML file.
Refer to
muspy.write_musicxml()
for full documentation.
- write_abc(path, **kwargs)[source]
Write to an ABC file.
Refer to
muspy.write_abc()
for full documentation.
- write_audio(path, **kwargs)[source]
Write to an audio file.
Refer to
muspy.write_audio()
for full documentation.
- to_object(kind, **kwargs)[source]
Return as an object in other libraries.
Refer to
muspy.to_object()
for full documentation.
- to_music21(**kwargs)[source]
Return as a Stream object.
Refer to
muspy.to_music21()
for full documentation.
- to_mido(**kwargs)[source]
Return as a MidiFile object.
Refer to
muspy.to_mido()
for full documentation.
- to_pretty_midi(**kwargs)[source]
Return as a PrettyMIDI object.
Refer to
muspy.to_pretty_midi()
for full documentation.
- to_pypianoroll(**kwargs)[source]
Return as a Multitrack object.
Refer to
muspy.to_pypianoroll()
for full documentation.
- to_representation(kind, **kwargs)[source]
Return in a specific representation.
Refer to
muspy.to_representation()
for full documentation.
- to_pitch_representation(**kwargs)[source]
Return in pitch-based representation.
Refer to
muspy.to_pitch_representation()
for full documentation.
- to_pianoroll_representation(**kwargs)[source]
Return in piano-roll representation.
Refer to
muspy.to_pianoroll_representation()
for full documentation.
- to_event_representation(**kwargs)[source]
Return in event-based representation.
Refer to
muspy.to_event_representation()
for full documentation.
- to_note_representation(**kwargs)[source]
Return in note-based representation.
Refer to
muspy.to_note_representation()
for full documentation.
- show(kind, **kwargs)[source]
Show visualization.
Refer to
muspy.show()
for full documentation.
- show_score(**kwargs)[source]
Show score visualization.
Refer to
muspy.show_score()
for full documentation.
- show_pianoroll(**kwargs)[source]
Show pianoroll visualization.
Refer to
muspy.show_pianoroll()
for full documentation.
- synthesize(**kwargs)[source]
Synthesize a Music object to raw audio.
Refer to
muspy.synthesize()
for full documentation.
- muspy.save(path, music, kind=None, **kwargs)[source]
Save a Music object losslessly to a JSON or a YAML file.
This is a wrapper function for
muspy.save_json()
andmuspy.save_yaml()
.- Parameters
path (str, Path or TextIO) – Path or file to save the data.
music (
muspy.Music
) – Music object to save.kind ({'json', 'yaml'}, optional) – Format to save. Defaults to infer from the extension.
**kwargs – Keyword arguments to pass to
muspy.save_json()
ormuspy.save_yaml()
.
See also
muspy.save_json()
Save a Music object to a JSON file.
muspy.save_yaml()
Save a Music object to a YAML file.
muspy.write()
Write a Music object to a MIDI/MusicXML/ABC/audio file.
Notes
The conversion can be lossy if any nonserializable object is used (for example, an Annotation object, which can store data of any type).
- muspy.save_json(path, music, skip_missing=True, ensure_ascii=False, compressed=None, **kwargs)[source]
Save a Music object to a JSON file.
- Parameters
path (str, Path or TextIO) – Path or file to save the JSON data.
music (
muspy.Music
) – Music object to save.skip_missing (bool, default: True) – Whether to skip attributes with value None or those that are empty lists.
ensure_ascii (bool, default: False) – Whether to escape non-ASCII characters. Will be passed to json.dumps().
compressed (bool, optional) – Whether to save as a compressed JSON file (.json.gz). Has no effect when path is a file object. Defaults to infer from the extension (.gz).
**kwargs – Keyword arguments to pass to
json.dumps()
.
Notes
When a path is given, use UTF-8 encoding and gzip compression if compressed=True.
- muspy.save_yaml(path, music, skip_missing=True, allow_unicode=True, compressed=None, **kwargs)[source]
Save a Music object to a YAML file.
- Parameters
path (str, Path or TextIO) – Path or file to save the YAML data.
music (
muspy.Music
) – Music object to save.skip_missing (bool, default: True) – Whether to skip attributes with value None or those that are empty lists.
allow_unicode (bool, default: True) – Whether to allow non-ASCII characters. Will be passed to
yaml.dump
.compressed (bool, optional) – Whether to save as a compressed YAML file (.yaml.gz). Has no effect when path is a file object. Defaults to infer from the extension (.gz).
**kwargs – Keyword arguments to pass to yaml.dump.
Notes
When a path is given, use UTF-8 encoding and gzip compression if compressed=True.
- muspy.synthesize(music, soundfont_path=None, rate=44100, gain=None)[source]
Synthesize a Music object to raw audio.
- Parameters
music (
muspy.Music
) – Music object to synthesize.soundfont_path (str or Path, optional) – Path to the soundfont file. Defaults to the path to the downloaded MuseScore General soundfont.
rate (int, default: 44100) – Sample rate (in samples per sec).
gain (float, optional) – Master gain (-g option) for Fluidsynth. Defaults to 1/n, where n is the number of tracks. This can be used to prevent distortions caused by clipping.
- Returns
Synthesized waveform.
- Return type
ndarray, dtype=int16, shape=(?, 2)
- muspy.to_default_event_representation(music, dtype=<class 'int'>)[source]
Encode a Music object into the default event representation.
- muspy.to_event_representation(music, use_single_note_off_event=False, use_end_of_sequence_event=False, encode_velocity=False, force_velocity_event=True, max_time_shift=100, velocity_bins=32, dtype=<class 'int'>)[source]
Encode a Music object into event-based representation.
The event-based representation represents music as a sequence of events, including note-on, note-off, time-shift and velocity events. The output shape is M x 1, where M is the number of events. The values encode the events. The default configuration uses 0-127 to encode note-on events, 128-255 for note-off events, 256-355 for time-shift events, and 356-387 for velocity events.
- Parameters
music (
muspy.Music
) – Music object to encode.use_single_note_off_event (bool, default: False) – Whether to use a single note-off event for all the pitches. If True, the note-off event will close all active notes, which can lead to lossy conversion for polyphonic music.
use_end_of_sequence_event (bool, default: False) – Whether to append an end-of-sequence event to the encoded sequence.
encode_velocity (bool, default: False) – Whether to encode velocities.
force_velocity_event (bool, default: True) – Whether to add a velocity event before every note-on event. If False, velocity events are only used when the note velocity is changed (i.e., different from the previous one).
max_time_shift (int, default: 100) – Maximum time shift (in ticks) to be encoded as a separate event. Time shifts larger than max_time_shift will be decomposed into two or more time-shift events.
velocity_bins (int, default: 32) – Number of velocity bins to use.
dtype (np.dtype, type or str, default: int) – Data type of the return array.
- Returns
Encoded array in event-based representation.
- Return type
ndarray, shape=(?, 1)
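The default value ranges described above can be sketched with a hypothetical per-value decoder (the mapping of time-shift values to shift lengths of 1 to max_time_shift is an assumption about the encoding's details):

```python
def decode_event(value, max_time_shift=100, velocity_bins=32):
    """Map an event value to its event type under the default configuration
    described above (0-127 note-on, 128-255 note-off, 256-355 time-shift,
    356-387 velocity) — a sketch, not MusPy's decoder."""
    if 0 <= value <= 127:
        return ("note_on", value)
    if 128 <= value <= 255:
        return ("note_off", value - 128)
    if 256 <= value < 256 + max_time_shift:
        return ("time_shift", value - 256 + 1)   # shifts of 1..max_time_shift (assumed)
    if 356 <= value < 356 + velocity_bins:
        return ("velocity", value - 356)
    raise ValueError(f"value out of range: {value}")

decode_event(60)    # ('note_on', 60)
decode_event(188)   # ('note_off', 60)
decode_event(256)   # ('time_shift', 1)
```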
- muspy.to_mido(music, use_note_off_message=False)[source]
Return a Music object as a MidiFile object.
- Parameters
music (
muspy.Music
object) – Music object to convert.use_note_off_message (bool, default: False) – Whether to use note-off messages. If False, note-on messages with zero velocity are used instead. The advantage of using note-on messages at zero velocity is that it can avoid sending additional status bytes when Running Status is employed.
- Returns
Converted MidiFile object.
- Return type
mido.MidiFile
- muspy.to_music21(music)[source]
Return a Music object as a music21 Score object.
- Parameters
music (
muspy.Music
) – Music object to convert.
- Returns
Converted music21 Score object.
- Return type
music21.stream.Score
- muspy.to_note_representation(music, use_start_end=False, encode_velocity=True, dtype=<class 'int'>)[source]
Encode a Music object into note-based representation.
The note-based representation represents music as a sequence of (time, pitch, duration, velocity) tuples. For example, a note Note(time=0, duration=4, pitch=60, velocity=64) will be encoded as a tuple (0, 60, 4, 64). The output shape is N * D, where N is the number of notes and D is 4 when encode_velocity is True, otherwise D is 3. The values of the second dimension represent time, pitch, duration and velocity (discarded when encode_velocity is False).
- Parameters
music (
muspy.Music
) – Music object to encode.
use_start_end (bool, default: False) – Whether to use ‘start’ and ‘end’ to encode the timing rather than ‘time’ and ‘duration’.
encode_velocity (bool, default: True) – Whether to encode note velocities.
dtype (np.dtype, type or str, default: int) – Data type of the return array.
- Returns
Encoded array in note-based representation.
- Return type
ndarray, shape=(?, 3 or 4)
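The encoding above can be sketched in a few lines of plain Python. This is a hypothetical helper for illustration (not the MusPy implementation); the row ordering under use_start_end, with end replacing duration in the third column, is an assumption:

```python
# Sketch of note-based encoding (hypothetical helper, not MusPy API).
# Each note becomes a row (time, pitch, duration, velocity); with
# use_start_end, the timing is encoded as (start, pitch, end) instead
# (column order under use_start_end is an assumption).

def encode_notes(notes, use_start_end=False, encode_velocity=True):
    rows = []
    for time, pitch, duration, velocity in notes:
        if use_start_end:
            row = [time, pitch, time + duration]
        else:
            row = [time, pitch, duration]
        if encode_velocity:
            row.append(velocity)
        rows.append(row)
    return rows

notes = [(0, 60, 4, 64), (4, 62, 2, 64)]
rows = encode_notes(notes)          # rows of (time, pitch, duration, velocity)
```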
- muspy.to_object(music, kind, **kwargs)[source]
Return a Music object as an object in other libraries.
Supported classes are music21.Stream, mido.MidiTrack, pretty_midi.PrettyMIDI and pypianoroll.Multitrack.
- Parameters
music (
muspy.Music
) – Music object to convert.
kind (str, {'music21', 'mido', 'pretty_midi', 'pypianoroll'}) – Target class.
- Returns
Converted object.
- Return type
music21.Stream, mido.MidiTrack, pretty_midi.PrettyMIDI or pypianoroll.Multitrack
- muspy.to_performance_event_representation(music, dtype=<class 'int'>)[source]
Encode a Music object into the performance event representation.
- muspy.to_pianoroll_representation(music, encode_velocity=True, dtype=None)[source]
Encode notes into piano-roll representation.
- Parameters
music (
muspy.Music
) – Music object to encode.
encode_velocity (bool, default: True) – Whether to encode velocities. If True, an integer array will be returned. Otherwise, a binary-valued array will be returned.
dtype (np.dtype, type or str, optional) – Data type of the return array. Defaults to uint8 if encode_velocity is True, otherwise bool.
- Returns
Encoded array in piano-roll representation.
- Return type
ndarray, shape=(?, 128)
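The piano-roll layout can be sketched in plain Python. The helper below is hypothetical and for illustration only (not the MusPy implementation): it builds a T x 128 matrix where cell [t][p] holds the velocity of any note sounding at time step t with pitch p, or 1/0 when velocities are not encoded:

```python
# Sketch of piano-roll encoding (hypothetical helper, not MusPy API).
# Notes are (time, pitch, duration, velocity) tuples.

def encode_pianoroll(notes, encode_velocity=True):
    length = max(time + duration for time, _, duration, _ in notes)
    roll = [[0] * 128 for _ in range(length)]   # T x 128 matrix
    for time, pitch, duration, velocity in notes:
        for t in range(time, time + duration):
            roll[t][pitch] = velocity if encode_velocity else 1
    return roll

# A single middle-C note of duration 2 and velocity 64:
roll = encode_pianoroll([(0, 60, 2, 64)])
```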
- muspy.to_pitch_representation(music, use_hold_state=False, dtype=<class 'int'>)[source]
Encode a Music object into pitch-based representation.
The pitch-based representation represents music as a sequence of pitch, rest and (optional) hold tokens. Only monophonic melodies are compatible with this representation. The output shape is T x 1, where T is the number of time steps. The values indicate whether the current time step is a pitch (0-127), a rest (128) or, optionally, a hold (129).
- Parameters
music (
muspy.Music
) – Music object to encode.
use_hold_state (bool, default: False) – Whether to use a special state for holds.
dtype (np.dtype, type or str, default: int) – Data type of the return array.
- Returns
Encoded array in pitch-based representation.
- Return type
ndarray, shape=(?, 1)
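The token scheme above can be sketched in plain Python. This is a hypothetical helper for illustration (not the MusPy implementation), assuming notes are (time, pitch, duration, velocity) tuples on a monophonic melody:

```python
# Sketch of pitch-based encoding (hypothetical helper, not MusPy API).
# Per time step: a pitch token (0-127), a rest (128) or a hold (129).
REST, HOLD = 128, 129

def encode_pitches(notes, length, use_hold_state=False):
    tokens = [REST] * length
    for time, pitch, duration, _ in notes:
        tokens[time] = pitch
        for t in range(time + 1, time + duration):
            # Without a hold state, sustained steps repeat the pitch token.
            tokens[t] = HOLD if use_hold_state else pitch
    return tokens

# A middle-C note of duration 2, followed by one step of rest:
tokens = encode_pitches([(0, 60, 2, 64)], length=3)
```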
- muspy.to_pretty_midi(music)[source]
Return a Music object as a PrettyMIDI object.
Tempo changes are not supported yet.
- Parameters
music (
muspy.Music
object) – Music object to convert.
- Returns
Converted PrettyMIDI object.
- Return type
pretty_midi.PrettyMIDI
Notes
Tempo information will not be included in the output.
- muspy.to_pypianoroll(music)[source]
Return a Music object as a Multitrack object.
- Parameters
music (
muspy.Music
) – Music object to convert.
- Returns
multitrack – Converted Multitrack object.
- Return type
pypianoroll.Multitrack
- muspy.to_remi_event_representation(music, dtype=<class 'int'>)[source]
Encode a Music object into the REMI event representation.
- muspy.to_representation(music, kind, **kwargs)[source]
Return a Music object in a specific representation.
- Parameters
music (
muspy.Music
) – Music object to convert.
kind (str, {'pitch', 'piano-roll', 'event', 'note'}) – Target representation.
- Returns
array – Converted representation.
- Return type
ndarray
- muspy.write(path, music, kind=None, **kwargs)[source]
Write a Music object to a MIDI/MusicXML/ABC/audio file.
- Parameters
path (str or Path) – Path to write the file.
music (
muspy.Music
) – Music object to convert.
kind ({'midi', 'musicxml', 'abc', 'audio'}, optional) – Format to save. Defaults to infer from the extension.
See also
muspy.save()
Save a Music object losslessly to a JSON or a YAML file.
- muspy.write_audio(path, music, audio_format=None, soundfont_path=None, rate=44100, gain=None)[source]
Write a Music object to an audio file.
Supported formats include WAV, AIFF, FLAC and OGA.
- Parameters
path (str or Path) – Path to write the audio file.
music (
muspy.Music
) – Music object to write.
audio_format (str, {'wav', 'aiff', 'flac', 'oga'}, optional) – File format to write. Defaults to infer from the extension.
soundfont_path (str or Path, optional) – Path to the soundfont file. Defaults to the path to the downloaded MuseScore General soundfont.
rate (int, default: 44100) – Sample rate (in samples per sec).
gain (float, optional) – Master gain (-g option) for Fluidsynth. Defaults to 1/n, where n is the number of tracks. This can be used to prevent distortions caused by clipping.
- muspy.write_midi(path, music, backend='mido', **kwargs)[source]
Write a Music object to a MIDI file.
- Parameters
path (str or Path) – Path to write the MIDI file.
music (
muspy.Music
) – Music object to write.
backend ({'mido', 'pretty_midi'}, default: 'mido') – Backend to use.
See also
write_midi_mido
Write a Music object to a MIDI file using mido as backend.
write_midi_pretty_midi
Write a Music object to a MIDI file using pretty_midi as backend.
- muspy.write_musicxml(path, music, compressed=None)[source]
Write a Music object to a MusicXML file.
- Parameters
path (str or Path) – Path to write the MusicXML file.
music (
muspy.Music
) – Music object to write.
compressed (bool, optional) – Whether to write to a compressed MusicXML file. If None, infer from the extension of the filename (‘.xml’ and ‘.musicxml’ for an uncompressed file, ‘.mxl’ for a compressed file).
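The extension-based inference of the compressed flag can be sketched as follows. This is a hypothetical helper for illustration, not the MusPy implementation:

```python
# Sketch of inferring MusicXML compression from the file extension
# (hypothetical helper, not MusPy API).
from pathlib import Path

def infer_compressed(path):
    suffix = Path(path).suffix.lower()
    if suffix in (".xml", ".musicxml"):
        return False        # uncompressed MusicXML
    if suffix == ".mxl":
        return True         # compressed MusicXML
    raise ValueError(f"cannot infer compression from extension {suffix!r}")
```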
- class muspy.NoteRepresentationProcessor(use_start_end=False, encode_velocity=True, dtype=<class 'int'>, default_velocity=64)[source]
Note-based representation processor.
The note-based representation represents music as a sequence of (time, duration, pitch, velocity) tuples. For example, a note Note(time=0, duration=4, pitch=60, velocity=64) will be encoded as a tuple (0, 4, 60, 64). The output shape is L * D, where L is the number of notes and D is 4 when encode_velocity is True, otherwise D is 3. The values of the second dimension represent time, duration, pitch and velocity (discarded when encode_velocity is False).
- use_start_end
Whether to use ‘start’ and ‘end’ to encode the timing rather than ‘time’ and ‘duration’.
- Type
bool, default: False
- default_velocity
Default velocity value to use when decoding if encode_velocity is False.
- Type
int, default: 64
- encode(music)[source]
Encode a Music object into note-based representation.
- Parameters
music (
muspy.Music
object) – Music object to encode.
- Returns
Encoded array in note-based representation.
- Return type
ndarray (np.uint8)
See also
muspy.to_note_representation()
Convert a Music object into note-based representation.
- decode(array)[source]
Decode note-based representation into a Music object.
- Parameters
array (ndarray) – Array in note-based representation to decode. Cast to integer if not of integer type.
- Returns
Decoded Music object.
- Return type
muspy.Music
object
See also
muspy.from_note_representation()
Return a Music object converted from note-based representation.
- class muspy.EventRepresentationProcessor(use_single_note_off_event=False, use_end_of_sequence_event=False, encode_velocity=False, force_velocity_event=True, max_time_shift=100, velocity_bins=32, default_velocity=64)[source]
Event-based representation processor.
The event-based representation represents music as a sequence of events, including note-on, note-off, time-shift and velocity events. The output shape is M x 1, where M is the number of events. The values encode the events. The default configuration uses 0-127 to encode note-on events, 128-255 for note-off events, 256-355 for time-shift events, and 356-387 for velocity events.
- use_single_note_off_event
Whether to use a single note-off event for all the pitches. If True, the note-off event will close all active notes, which can lead to lossy conversion for polyphonic music.
- Type
bool, default: False
- use_end_of_sequence_event
Whether to append an end-of-sequence event to the encoded sequence.
- Type
bool, default: False
- force_velocity_event
Whether to add a velocity event before every note-on event. If False, velocity events are only used when the note velocity is changed (i.e., different from the previous one).
- Type
bool, default: True
- max_time_shift
Maximum time shift (in ticks) to be encoded as a separate event. Time shifts larger than max_time_shift will be decomposed into two or more time-shift events.
- Type
int, default: 100
- encode(music)[source]
Encode a Music object into event-based representation.
- Parameters
music (
muspy.Music
object) – Music object to encode.
- Returns
Encoded array in event-based representation.
- Return type
ndarray (np.uint16)
See also
muspy.to_event_representation()
Convert a Music object into event-based representation.
- decode(array)[source]
Decode event-based representation into a Music object.
- Parameters
array (ndarray) – Array in event-based representation to decode. Cast to integer if not of integer type.
- Returns
Decoded Music object.
- Return type
muspy.Music
object
See also
muspy.from_event_representation()
Return a Music object converted from event-based representation.
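Decoding inverts the vocabulary layout described in the class docstring. The helper below is a hypothetical sketch for illustration (not the MusPy implementation); the velocity decoding recovers only the lower bound of each 32-wide bin, which is one reason the conversion is lossy:

```python
# Sketch of decoding the default event vocabulary back into events
# (hypothetical helper, not MusPy API).

def decode_event(index):
    if 0 <= index <= 127:
        return ("note_on", index)
    if 128 <= index <= 255:
        return ("note_off", index - 128)
    if 256 <= index <= 355:
        return ("time_shift", index - 256 + 1)          # shift of 1-100 ticks
    if 356 <= index <= 387:
        return ("velocity", (index - 356) * 128 // 32)  # bin lower bound
    raise ValueError(f"unknown event index: {index}")

decoded = [decode_event(i) for i in (372, 60, 355, 188)]
```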
- class muspy.PianoRollRepresentationProcessor(encode_velocity=True, default_velocity=64)[source]
Piano-roll representation processor.
The piano-roll representation represents music as a time-pitch matrix, where the rows are the time steps and the columns are the pitches. The values indicate the presence of pitches at different time steps. The output shape is T x 128, where T is the number of time steps.
- encode_velocity
Whether to encode velocities. If True, an integer array will be returned. Otherwise, a binary-valued array will be returned.
- Type
bool, default: True
- default_velocity
Default velocity value to use when decoding if encode_velocity is False.
- Type
int, default: 64
- encode(music)[source]
Encode a Music object into piano-roll representation.
- Parameters
music (
muspy.Music
object) – Music object to encode.
- Returns
Encoded array in piano-roll representation.
- Return type
ndarray (np.uint8)
See also
muspy.to_pianoroll_representation()
Convert a Music object into piano-roll representation.
- decode(array)[source]
Decode piano-roll representation into a Music object.
- Parameters
array (ndarray) – Array in piano-roll representation to decode. Cast to integer if not of integer type. If encode_velocity is False, cast to boolean if not of boolean type.
- Returns
Decoded Music object.
- Return type
muspy.Music
object
See also
muspy.from_pianoroll_representation()
Return a Music object converted from piano-roll representation.
- class muspy.PitchRepresentationProcessor(use_hold_state=False, default_velocity=64)[source]
Pitch-based representation processor.
The pitch-based representation represents music as a sequence of pitch, rest and (optional) hold tokens. Only monophonic melodies are compatible with this representation. The output shape is T x 1, where T is the number of time steps. The values indicate whether the current time step is a pitch (0-127), a rest (128) or, optionally, a hold (129).
- encode(music)[source]
Encode a Music object into pitch-based representation.
- Parameters
music (
muspy.Music
object) – Music object to encode.
- Returns
Encoded array in pitch-based representation.
- Return type
ndarray (np.uint8)
See also
muspy.to_pitch_representation()
Convert a Music object into pitch-based representation.
- decode(array)[source]
Decode pitch-based representation into a Music object.
- Parameters
array (ndarray) – Array in pitch-based representation to decode. Cast to integer if not of integer type.
- Returns
Decoded Music object.
- Return type
muspy.Music
object
See also
muspy.from_pitch_representation()
Return a Music object converted from pitch-based representation.
- muspy.validate_json(path)[source]
Validate a file against the JSON schema.
- Parameters
path (str or Path) – Path to the file to validate.
- muspy.validate_musicxml(path)[source]
Validate a file against the MusicXML schema.
- Parameters
path (str or Path) – Path to the file to validate.
- muspy.validate_yaml(path)[source]
Validate a file against the YAML schema.
- Parameters
path (str or Path) – Path to the file to validate.
- muspy.show(music, kind, **kwargs)[source]
Show visualization.
- Parameters
music (
muspy.Music
) – Music object to convert.
kind ({'piano-roll', 'score'}) – Target representation.
- muspy.show_score(music, figsize=None, clef='treble', clef_octave=0, note_spacing=None, font_path=None, font_scale=None)[source]
Show score visualization.
- Parameters
music (
muspy.Music
) – Music object to show.
figsize ((float, float), optional) – Width and height in inches. Defaults to Matplotlib configuration.
clef ({'treble', 'alto', 'bass'}, default: 'treble') – Clef type.
clef_octave (int, default: 0) – Clef octave.
note_spacing (int, default: 4) – Spacing of notes.
font_path (str or Path, optional) – Path to the music font. Defaults to the path to the downloaded Bravura font.
font_scale (float, default: 140) – Font scaling factor for finetuning. The default value of 140 is optimized for the default Bravura font.
- Returns
A ScorePlotter object that handles the score.
- Return type
muspy.ScorePlotter
- class muspy.ScorePlotter(fig, ax, resolution, note_spacing=None, font_path=None, font_scale=None)[source]
A plotter that handles the score visualization.
- fig
Figure object to plot the score on.
- axes
Axes object to plot the score on.
- Type
matplotlib.axes.Axes
- font_path
Path to the music font. Defaults to the path to the downloaded Bravura font.
- Type
str or Path, optional