Writer for NSDF file format.
Bases: object
Writer for NSDF files.
An NSDF file has three main groups: /model, /data and /map.
str
File open mode. Defaults to append (‘a’); ‘w’ and ‘w+’ are also accepted.
nsdf.dialect member
ONED for storing nonuniformly sampled and event data in 1D arrays.
VLEN for storing such data in 2D VLEN datasets.
NANPADDED for storing such data in 2D homogeneous datasets with NaN padding.
h5.Group
/model group
h5.Group
/data group
h5.Group
/map group
h5.Group
/map/time group contains the sampling time points as dimension scales of data. It is mainly used for nonuniformly sampled data.
h5.Group
/model/modeltree group can be used for storing the model in a hierarchical manner. Each subgroup under modeltree is a model component and can contain other subgroups representing subcomponents. Each group stores the unique identifier of the model component it represents in the string attribute uid.
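As a quick orientation, the sketch below opens a writer and looks at the top-level groups. The constructor arguments and the attribute names (model, data, mapping) are assumptions based on the attribute list above and may differ between versions of the library:

```python
import nsdf

# Open a writer in write mode using the ONED dialect (constructor
# signature assumed; check your nsdf version for the exact arguments).
writer = nsdf.NSDFWriter('example.h5', dialect=nsdf.dialect.ONED, mode='w')

# The three main groups described above; attribute names are assumed here.
print(writer.model)    # /model
print(writer.data)     # /data
print(writer.mapping)  # /map
```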
Add event time data when data from each source is in a separate 1D dataset.
For a population of sources called {population}, a group /map/event/{population} must first be created (using add_event_ds). This is passed as the source_ds argument.
When adding the data, the uid of the sources and the names for the corresponding datasets must be specified in source_name_dict, and this function will create one dataset for each source under /data/event/{population}/{name}, where {name} is the name of the data_object, preferably the field name.
Parameters:
Returns: dict mapping source ids to datasets.
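The resulting HDF5 layout can be sketched directly with h5py; the population name granule_cells, the field name spike, and the dataset names cell_0/cell_1 below are hypothetical:

```python
import h5py
import numpy as np

with h5py.File('event_1d_layout.h5', 'w') as fd:
    # One 1D dataset per source; dataset names come from source_name_dict.
    grp = fd.create_group('/data/event/granule_cells/spike')
    grp.create_dataset('cell_0', data=np.array([0.01, 0.05, 0.12]))
    grp.create_dataset('cell_1', data=np.array([0.02, 0.07]))
    # The source-to-dataset mapping lives under /map/event/granule_cells.
```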
Create a group named name under /map/event to store the mapping between the data sources and event data.
Parameters:
Returns: The HDF5 Group /map/event/{name}.
Create a dataset under /map/event/{popname} named varname to store the mapping between the data sources and event data.
Parameters:
Returns: The HDF5 Dataset /map/event/{popname}/{varname}.
Add event data when data from all sources in a population is stored in a 2D array with NaN padding.
Parameters:
Returns: HDF5 Dataset containing the data.
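The padding scheme itself is plain NumPy; a small sketch of how ragged event trains from two hypothetical sources fit into one homogeneous 2D array:

```python
import numpy as np

# Ragged event times from two hypothetical sources.
spikes = [np.array([0.01, 0.05, 0.12]), np.array([0.02, 0.07])]

# Rows correspond to sources; unused trailing cells are filled with NaN.
width = max(len(s) for s in spikes)
padded = np.full((len(spikes), width), np.nan)
for row, s in zip(padded, spikes):
    row[:len(s)] = s
# padded -> [[0.01, 0.05, 0.12],
#            [0.02, 0.07,  nan]]
```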
Add event data when data from all sources in a population is stored in a 2D ragged array.
When adding the data, the uid of the sources and the names for the corresponding datasets must be specified, and this function will create the dataset /data/event/{population}/{name}, where {name} is the name of the data_object, preferably the name of the field being recorded.
Parameters:
Returns: HDF5 Dataset containing the data.
Notes
Concatenating old data with new data and reassigning is a poor choice for saving data incrementally. HDF5 does not seem to support appending data to VLEN datasets.
h5py does not support vlen datasets with float64 elements. Change dtype to np.float64 once that is developed.
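For reference, this is roughly what the ragged (VLEN) storage looks like at the h5py level; the file and dataset names are hypothetical, and float32 is used because of the float64 limitation mentioned in the notes:

```python
import h5py
import numpy as np

# Each row of the dataset is a variable-length array of event times.
vlen_f32 = h5py.special_dtype(vlen=np.dtype('float32'))
with h5py.File('event_vlen_layout.h5', 'w') as fd:
    ds = fd.create_dataset('/data/event/granule_cells/spike',
                           shape=(2,), dtype=vlen_f32)
    ds[0] = np.array([0.01, 0.05, 0.12], dtype='float32')
    ds[1] = np.array([0.02, 0.07], dtype='float32')
```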
Add the files and directories listed in filenames to /model/filecontents.
This function is for storing the contents of model files in the NSDF file. It is useful for external formats such as NeuroML, NineML, SBML, and NEURON/GENESIS scripts. Each directory is stored as a group and each file is stored as a dataset.
Parameters:
Add an entire model tree. This writes the model tree rooted at root to the NSDF file.
Parameters:
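The on-disk result corresponds to the modeltree layout described in the class attributes: one group per component with its unique identifier in the uid attribute. A hand-rolled h5py sketch with hypothetical component names:

```python
import h5py

with h5py.File('modeltree_layout.h5', 'w') as fd:
    # Each model component is a group; subcomponents are subgroups.
    net = fd.create_group('/model/modeltree/network')
    net.attrs['uid'] = 'network/0'               # hypothetical uid
    cell = net.create_group('cell_0')
    cell.attrs['uid'] = 'network/0/cell_0'
    soma = cell.create_group('soma')
    soma.attrs['uid'] = 'network/0/cell_0/soma'
```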
Add nonuniform data when data from each source is in a separate 1D dataset.
For a population of sources called {population}, a group /map/nonuniform/{population} must first be created (using add_nonuniform_ds). This is passed as the source_ds argument.
When adding the data, the uid of the sources and the names for the corresponding datasets must be specified, and this function will create one dataset for each source under /data/nonuniform/{population}/{name}, where {name} is the name of the data_object, preferably the name of the field being recorded.
This function can be used when different sources in a population are sampled at different time points for a field value. Such a case may arise when each member of the population is simulated with a variable-timestep method like CVODE and the timestep is not global.
Parameters:
Returns: dict mapping source ids to the tuple (dataset, time).
Raises: AssertionError when dialect is not ONED.
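In the ONED dialect each source gets its own value dataset, with its own sampling times stored under /map/time and attached as a dimension scale. A minimal h5py sketch with hypothetical names (make_scale requires h5py 2.10 or later):

```python
import h5py
import numpy as np

with h5py.File('nonuniform_1d_layout.h5', 'w') as fd:
    vm = fd.create_dataset('/data/nonuniform/cells/Vm/cell_0',
                           data=np.array([-65.0, -64.2, -63.8]))
    t = fd.create_dataset('/map/time/cell_0_Vm',
                          data=np.array([0.0, 0.13, 0.29]))
    t.make_scale('time')            # sampling times as a dimension scale
    vm.dims[0].attach_scale(t)
```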
Add the sources listed in idlist under /map/nonuniform/{popname}.
Parameters:
Returns: An HDF5 Dataset storing the source ids when dialect is VLEN or NANPADDED. This is converted into a dimension scale when actual data is added.
Raises: AssertionError if idlist is empty or dialect is ONED.
Add the sources listed in idlist under /map/nonuniform/{popname}/{varname}.
In the case of 1D datasets, for each variable we store the mapping from source id to dataset reference in a two-column compound dataset with dtype=[('source', VLENSTR), ('data', REFTYPE)].
Parameters:
Returns: An HDF5 Dataset storing the source ids in the source column.
Raises: AssertionError if idlist is empty or if dialect is not ONED.
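The two-column compound mapping can be reproduced with h5py's variable-length string and object-reference dtypes; the dataset paths and the source id below are hypothetical:

```python
import h5py
import numpy as np

vlen_str = h5py.special_dtype(vlen=str)            # VLENSTR
ref_type = h5py.special_dtype(ref=h5py.Reference)  # REFTYPE
map_dtype = np.dtype([('source', vlen_str), ('data', ref_type)])

with h5py.File('map_1d_layout.h5', 'w') as fd:
    d0 = fd.create_dataset('/data/nonuniform/cells/Vm/cell_0',
                           data=[-65.0, -64.2])
    mapping = fd.create_dataset('/map/nonuniform/cells/Vm',
                                shape=(1,), dtype=map_dtype)
    mapping[0] = ('cell_0', d0.ref)   # source uid and a reference to its data
```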
Add nonuniform data when data from all sources in a population is stored in a 2D array with NaN padding.
Parameters:
Returns: HDF5 Dataset containing the data.
Notes
Concatenating old data with new data and reassigning is a poor choice for saving data incrementally. HDF5 does not seem to support appending data to VLEN datasets.
h5py does not support vlen datasets with float64 elements. Change dtype to np.float64 once that is developed.
Append nonuniformly sampled variable values from sources to data. In this case the sampling times of all the sources are the same, and the data is stored in a 2D dataset.
Parameters:
Returns: HDF5 dataset storing the data.
Raises:
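When all sources share the same sampling times, the values fit into a single 2D dataset with the shared times attached as a dimension scale on the time axis; a minimal h5py sketch with hypothetical names:

```python
import h5py
import numpy as np

times = np.array([0.0, 0.13, 0.29, 0.5])                 # shared, nonuniform
values = np.array([[-65.0, -64.2, -63.8, -63.1],          # one row per source
                   [-65.0, -64.9, -64.1, -63.7]])

with h5py.File('nonuniform_regular_layout.h5', 'w') as fd:
    ds = fd.create_dataset('/data/nonuniform/cells/Vm', data=values)
    t = fd.create_dataset('/map/time/cells_Vm', data=times)
    t.make_scale('time')
    ds.dims[1].attach_scale(t)   # columns correspond to the shared time points
```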
Add nonuniform data when data from all sources in a population is stored in a 2D ragged array.
When adding the data, the uid of the sources and the names for the corresponding datasets must be specified, and this function will create the dataset /data/nonuniform/{population}/{name}, where {name} is the first argument, preferably the name of the field being recorded.
This function can be used when different sources in a population are sampled at different time points for a field value. Such a case may arise when each member of the population is simulated with a variable-timestep method like CVODE and the timestep is not global.
Parameters:
Returns: tuple containing HDF5 Datasets for the data and sampling times.
Notes
Concatenating old data with new data and reassigning is a poor choice. We are waiting for a response from the h5py mailing list about appending data to rows of VLEN datasets; if that is not possible, a VLEN dataset is technically a poor choice.
h5py does not support vlen datasets with float64 elements. Change dtype to np.float64 once that is developed.
Append static data variable values from sources to data.
Parameters:
source_ds (HDF5 Dataset) –
fixed (bool) – if True, the data cannot grow. Default: True
Returns: HDF5 dataset storing the data.
Raises:
Add the sources listed in idlist under /map/static.
Parameters:
Returns: An HDF5 Dataset storing the source ids. This is converted into a dimension scale when actual data is added.
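A sketch of how such a source dataset ends up being used as a dimension scale once static data is written; names are hypothetical and the make_scale call requires h5py 2.10 or later:

```python
import h5py
import numpy as np

vlen_str = h5py.special_dtype(vlen=str)
with h5py.File('static_layout.h5', 'w') as fd:
    src = fd.create_dataset('/map/static/cells', shape=(3,), dtype=vlen_str)
    src[:] = ['cell_0', 'cell_1', 'cell_2']        # source uids
    area = fd.create_dataset('/data/static/cells/surface_area',
                             data=np.array([1.2e-9, 1.5e-9, 1.1e-9]))
    src.make_scale('source')
    area.dims[0].attach_scale(src)   # row i of the data belongs to source i
```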
Append uniformly sampled variable values from sources to data.
Parameters:
Returns: HDF5 dataset storing the data.
Raises:
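Uniformly sampled data for a population typically ends up as one 2D dataset (rows = sources, columns = samples) with the sampling metadata stored as attributes. The attribute names dt, tstart, and tunit below are illustrative assumptions, not a guaranteed part of the writer's output:

```python
import h5py
import numpy as np

values = np.zeros((3, 100))   # 3 hypothetical sources, 100 samples each
with h5py.File('uniform_layout.h5', 'w') as fd:
    ds = fd.create_dataset('/data/uniform/cells/Vm', data=values,
                           maxshape=(3, None))   # growable along the time axis
    ds.attrs['dt'] = 0.1       # illustrative sampling-interval attribute
    ds.attrs['tstart'] = 0.0
    ds.attrs['tunit'] = 'ms'
```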
Add the sources listed in idlist under /map/uniform.
Parameters:
Returns: An HDF5 Dataset storing the source ids. This is converted into a dimension scale when actual data is added.
Set the file attributes (environments).
Parameters:
properties (dict) – mapping property names to values. It must contain the following keys:
title (str)
creator (list of str)
software (list of str)
method (list of str)
description (str)
rights (str)
tstart (datetime.datetime)
tend (datetime.datetime)
contributor (list of str)
Raises: KeyError if not all environment properties are specified in the dict.
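An example of a complete properties dict; every key listed above must be present or a KeyError is raised. The call at the end is commented out because the method name set_properties and the writer variable are assumptions:

```python
from datetime import datetime

properties = {
    'title': 'Example simulation',
    'creator': ['Jane Doe'],
    'software': ['MOOSE'],
    'method': ['exponential Euler'],
    'description': 'Toy network used to illustrate the writer API.',
    'rights': 'CC-BY 4.0',
    'tstart': datetime(2024, 1, 1, 12, 0, 0),
    'tend': datetime(2024, 1, 1, 12, 5, 0),
    'contributor': ['John Doe'],
}
# writer.set_properties(properties)   # method name assumed; see above
```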
Add a model component as a group under parentgroup.
This creates a group component.name under parentgroup if it is not already present. The uid of the component is stored in the uid attribute of the group. Key-value pairs in the component.attrs dict are stored as attributes of the group.
Parameters:
Returns: HDF Group created for this model component.
Raises:
Match entries in hdfds with those in pydata. Returns True if the two sets are equal, False otherwise.
Add a dataset name under group and store the contents of text file fname in it.
Add a dataset name under group and store the contents of binary file fname in it.
Walk the directory tree rooted at root_dir and replicate it under root_group in HDF5 file.
This is a helper function for copying a model directory structure and file contents into an HDF5 file. If ascii=True, all files are treated as ASCII text; otherwise all files are stored as binary blobs.
Parameters:
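A simplified sketch of the same idea, assuming ASCII files only; the helper name copy_tree and the directory name models are hypothetical:

```python
import os
import h5py

def copy_tree(root_group, root_dir):
    """Replicate a directory tree under an HDF5 group: directories become
    subgroups, text files become string datasets holding their contents."""
    for dirpath, dirnames, filenames in os.walk(root_dir):
        rel = os.path.relpath(dirpath, root_dir).replace(os.sep, '/')
        grp = root_group if rel == '.' else root_group.require_group(rel)
        for fname in filenames:
            with open(os.path.join(dirpath, fname), 'r') as fh:
                grp.create_dataset(fname, data=fh.read())

with h5py.File('model_files.h5', 'w') as fd:
    copy_tree(fd.require_group('/model/filecontents'), 'models')
```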