Textbook
Universal NLU/NLI Dataset Processing Framework
It is designed with BERT in mind and currently supports seven commonsense reasoning datasets (alphanli, hellaswag, physicaliqa, socialiqa, codah, cosmosqa, and commonsenseqa). It can also be applied to other datasets with a few lines of code.
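For example, supporting a new multiple-choice dataset mostly amounts to writing a template function that maps each raw row to the common format, which then plugs into MultiModalDataset exactly as in the Usage examples below. A minimal sketch, where the field names premise, choices, and label are assumptions for illustration (mirror a bundled template such as template_anli for the exact return contract):
def template_mytask(row, label2int):
    # "premise", "choices", and "label" are hypothetical field names;
    # the returned keys must match what the renderers expect -- see
    # the bundled templates (e.g. template_anli) for the real contract.
    return {
        "text": [(row["premise"], choice) for choice in row["choices"]],
        "label": label2int[str(row["label"])],
    }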
Architecture
Dependencies
# PyAV (the av package), used for video decoding
conda install av -c conda-forge
# install from source
pip install -r requirements.txt
pip install --editable .
# or install the released package
pip install textbook
Download raw datasets
./fetch.sh
It downloads alphanli, hellaswag, physicaliqa, socialiqa, codah, cosmosqa, and commonsenseqa from AWS into data_cache.
If you want to use something-something, please download the dataset from 20bn's website.
Usage
1. Load a dataset with parallel pandas
from torch.utils.data import DataLoader
from transformers import BertTokenizer
from textbook import *
import modin.pandas as pd

tokenizer = BertTokenizer.from_pretrained('bert-base-cased')

d1 = MultiModalDataset(
    df=pd.read_json("data_cache/alphanli/train.jsonl", lines=True),
    template=lambda x: template_anli(x, LABEL2INT['anli']),
    renderers=[lambda x: renderer_text(x, tokenizer)],
)
bt1 = BatchTool(tokenizer, source="anli")
i1 = DataLoader(d1, batch_sampler=TokenBasedSampler(d1, batch_size=64), collate_fn=bt1.collate_fn)
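i1 is a standard PyTorch DataLoader, so it can be iterated directly. A minimal sketch, assuming the batch keys shown in the multitask example below:
for batch in i1:
    # "input_ids" and "labels" follow the batch layout shown in the
    # multitask example below; adjust if your renderers differ
    print(batch["input_ids"].shape, batch["labels"])
    break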
2. Create a multitask dataset with multiple datasets
from torch.utils.data import DataLoader
from transformers import BertTokenizer
from textbook import *
import modin.pandas as pd

tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
# add an additional special token for each task, used as that task's `cls_token`
tokenizer.add_special_tokens({"additional_special_tokens": [
    "[ANLI]", "[HELLASWAG]"
]})

d1 = MultiModalDataset(
    df=pd.read_json("data_cache/alphanli/train.jsonl", lines=True),
    template=lambda x: template_anli(x, LABEL2INT['anli']),
    renderers=[lambda x: renderer_text(x, tokenizer, "[ANLI]")],
)
bt1 = BatchTool(tokenizer, source="anli")
i1 = DataLoader(d1, batch_sampler=TokenBasedSampler(d1, batch_size=64), collate_fn=bt1.collate_fn)
d2 = MultiModalDataset(
    df=pd.read_json("data_cache/hellaswag/train.jsonl", lines=True),
    template=lambda x: template_hellaswag(x, LABEL2INT['hellaswag']),
    renderers=[lambda x: renderer_text(x, tokenizer, "[HELLASWAG]")],
)
bt2 = BatchTool(tokenizer, source="hellaswag")
i2 = DataLoader(d2, batch_sampler=TokenBasedSampler(d2, batch_size=64), collate_fn=bt2.collate_fn)
d = MultiTaskDataset([i1, i2], shuffle=False)

#! batch_size must be 1 for MultiTaskDataset, because batching already
#! happened inside each sub-dataset's loader
for batch in DataLoader(d, batch_size=1, collate_fn=BatchTool.uncollate_fn):
    pass
# {
# "source": "anli" or "hellaswag",
# "labels": ...,
# "input_ids": ...,
# "attentions": ...,
# "token_type_ids": ...,
# "images": ...,
# }
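Since each pre-batched item carries its source, a multitask training step can dispatch it to a task-specific head. A minimal sketch, assuming a plain BertModel encoder and one linear scoring head per task; the head setup is hypothetical and not part of textbook:
from torch import nn
from transformers import BertModel

# Hypothetical multitask heads; textbook itself does not ship these.
encoder = BertModel.from_pretrained('bert-base-cased')
encoder.resize_token_embeddings(len(tokenizer))  # account for the added task tokens
heads = nn.ModuleDict({
    "anli": nn.Linear(encoder.config.hidden_size, 1),
    "hellaswag": nn.Linear(encoder.config.hidden_size, 1),
})

for batch in DataLoader(d, batch_size=1, collate_fn=BatchTool.uncollate_fn):
    out = encoder(
        input_ids=batch["input_ids"],
        attention_mask=batch["attentions"],
        token_type_ids=batch["token_type_ids"],
    )
    # score every candidate with the head picked by "source"; how the
    # logits map back to choices depends on BatchTool's batch layout
    logits = heads[batch["source"]](out.pooler_output).squeeze(-1)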