# Blended Skill Talk

## Dataset Summary
This dataset contains conversations between two personas, along with additional context, previous utterances, free messages, guided messages, suggestions, and guided chosen suggestions, supporting natural multi-turn conversations that blend personality, empathy, and knowledge.
The conversations are designed to exercise a range of conversational competencies, such as dialogue flow management, including response timing, topic control, and coherence. The dataset also provides a basis for exploring how different conversational styles affect user engagement, and for validating dialogue systems across varied settings while revealing potential biases present in different contexts. Finally, it enables benchmarking against similar datasets toward the development of automatic evaluation of blended skill talk performance over time.
## Data Structure

### Fields
| Field | Description |
|---|---|
| personas | List of personas participating in the conversation |
| additional_context | Extra context or scenario description |
| previous_utterance | The immediately preceding dialogue turn(s) |
| context | General conversation context |
| free_messages | Free-form user or system messages |
| guided_messages | Messages generated via guided prompts |
| suggestions | Suggested responses drawn from ConvAI2, EmpatheticDialogues, and Wizard of Wikipedia |
| guided_chosen_suggestions | Selected suggestions actually used in the conversation |
| label_candidates | Optional candidate labels (null in this dataset) |
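A record following this schema can be sketched as a plain Python dict. The field names come from the table above; the values are invented for illustration and are not actual entries from the dataset:

```python
# Illustrative record matching the field schema above.
# All values are made up for demonstration; real rows come from load_dataset.
record = {
    "personas": ["i love cooking.", "i have two dogs."],
    "additional_context": "Kitchen",
    "previous_utterance": ["Do you cook often?", "Yes, almost every day!"],
    "context": "wizard_of_wikipedia",
    "free_messages": ["What do you like to cook?"],
    "guided_messages": ["I mostly bake bread on weekends."],
    "suggestions": {
        "convai2": ["i like pasta ."],
        "empathetic_dialogues": ["That sounds relaxing."],
        "wizard_of_wikipedia": ["Bread is a staple food."],
    },
    "guided_chosen_suggestions": [""],
    "label_candidates": [],  # null/empty in this dataset
}

# Each persona entry is one line of self-description for a speaker.
print(record["personas"][0])
```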
### Splits
| Split | Examples | Size (bytes) | Description |
|---|---|---|---|
| Train | 4096 | 9201244 | Used for model training |
| Validation | 723 | 1629426 | Used for validation and tuning |
Total dataset size: 10830670 bytes
Total number of dialogues: 4819
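The totals above can be checked directly against the per-split figures in the table:

```python
# Consistency check of the split statistics listed above.
train_examples, val_examples = 4096, 723
train_bytes, val_bytes = 9_201_244, 1_629_426

total_dialogues = train_examples + val_examples
total_bytes = train_bytes + val_bytes

print(total_dialogues)  # 4819
print(total_bytes)      # 10830670
```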
## Usage Example

```python
from datasets import load_dataset

ds = load_dataset("anezatra/blended-skill-talk", split="train")
print(ds[0])
```
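Since `free_messages` and `guided_messages` hold the alternating turns of the two speakers, a dialogue can be reconstructed by interleaving them. A minimal sketch, using a hypothetical record in place of a real row such as `ds[0]`:

```python
# Reconstruct dialogue turns by interleaving free and guided messages.
# `record` is a hypothetical stand-in for one dataset row (e.g. ds[0]).
record = {
    "free_messages": ["Hi, how are you?", "What are your hobbies?"],
    "guided_messages": ["Doing well, thanks!", "I enjoy painting."],
}

dialogue = []
for free, guided in zip(record["free_messages"], record["guided_messages"]):
    dialogue.append(("speaker_1", free))
    dialogue.append(("speaker_2", guided))

for speaker, text in dialogue:
    print(f"{speaker}: {text}")
```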
## References

- Smith, E. M., Williamson, M., Shuster, K., Weston, J., & Boureau, Y.-L. (2020). Can You Put It All Together: Evaluating Conversational Agents' Ability to Blend Skills. arXiv preprint arXiv:2004.08449. https://arxiv.org/abs/2004.08449