Tokenizer, POS-Tagger and Dependency-Parser for Classical Chinese Texts (漢文/文言文) with spaCy, Transformers and SuPar.
>>> import suparkanbun
>>> nlp=suparkanbun.load()
>>> doc=nlp("不入虎穴不得虎子")
>>> print(type(doc))
<class 'spacy.tokens.doc.Doc'>
>>> print(suparkanbun.to_conllu(doc))
# text = 不入虎穴不得虎子
1	不	不	ADV	v,副詞,否定,無界	Polarity=Neg	2	advmod	_	Gloss=not|SpaceAfter=No
2	入	入	VERB	v,動詞,行為,移動	_	0	root	_	Gloss=enter|SpaceAfter=No
3	虎	虎	NOUN	n,名詞,主体,動物	_	4	nmod	_	Gloss=tiger|SpaceAfter=No
4	穴	穴	NOUN	n,名詞,固定物,地形	Case=Loc	2	obj	_	Gloss=cave|SpaceAfter=No
5	不	不	ADV	v,副詞,否定,無界	Polarity=Neg	6	advmod	_	Gloss=not|SpaceAfter=No
6	得	得	VERB	v,動詞,行為,得失	_	2	parataxis	_	Gloss=get|SpaceAfter=No
7	虎	虎	NOUN	n,名詞,主体,動物	_	8	nmod	_	Gloss=tiger|SpaceAfter=No
8	子	子	NOUN	n,名詞,人,関係	_	6	obj	_	Gloss=child|SpaceAfter=No
>>> import deplacy
>>> deplacy.render(doc)
不 ADV  <════╗   advmod
入 VERB ═══╗═╝═╗ ROOT
虎 NOUN <╗ ║   ║ nmod
穴 NOUN ═╝<╝   ║ obj
不 ADV  <════╗ ║ advmod
得 VERB ═══╗═╝<╝ parataxis
虎 NOUN <╗ ║     nmod
子 NOUN ═╝<╝     obj
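The CoNLL-U text produced by suparkanbun.to_conllu() is plain tab-separated Universal Dependencies data, so it can be post-processed with ordinary Python. A minimal sketch (no suparkanbun required — the token rows below are copied from the example output above):

```python
# Each CoNLL-U row has 10 tab-separated columns:
# ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL, DEPS, MISC
ROWS = [
    "1\t不\t不\tADV\tv,副詞,否定,無界\tPolarity=Neg\t2\tadvmod\t_\tGloss=not|SpaceAfter=No",
    "2\t入\t入\tVERB\tv,動詞,行為,移動\t_\t0\troot\t_\tGloss=enter|SpaceAfter=No",
    "3\t虎\t虎\tNOUN\tn,名詞,主体,動物\t_\t4\tnmod\t_\tGloss=tiger|SpaceAfter=No",
    "4\t穴\t穴\tNOUN\tn,名詞,固定物,地形\tCase=Loc\t2\tobj\t_\tGloss=cave|SpaceAfter=No",
    "5\t不\t不\tADV\tv,副詞,否定,無界\tPolarity=Neg\t6\tadvmod\t_\tGloss=not|SpaceAfter=No",
    "6\t得\t得\tVERB\tv,動詞,行為,得失\t_\t2\tparataxis\t_\tGloss=get|SpaceAfter=No",
    "7\t虎\t虎\tNOUN\tn,名詞,主体,動物\t_\t8\tnmod\t_\tGloss=tiger|SpaceAfter=No",
    "8\t子\t子\tNOUN\tn,名詞,人,関係\t_\t6\tobj\t_\tGloss=child|SpaceAfter=No",
]

def parse_conllu(lines):
    """Extract (form, upos, head, deprel, gloss) from CoNLL-U token lines."""
    out = []
    for line in lines:
        if not line or line.startswith("#"):  # skip comments and blanks
            continue
        cols = line.split("\t")
        misc = dict(kv.split("=", 1) for kv in cols[9].split("|"))
        out.append((cols[1], cols[3], int(cols[6]), cols[7], misc.get("Gloss")))
    return out

tokens = parse_conllu(ROWS)
root = next(t for t in tokens if t[2] == 0)  # HEAD=0 marks the root verb
print(root)
print(" ".join(gloss for _, _, _, _, gloss in tokens))
```

The same tuples could of course be read straight off the spaCy Doc (token.pos_, token.dep_); parsing the CoNLL-U text is useful when the output is saved to a file.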
suparkanbun.load() takes two keyword options, with defaults suparkanbun.load(BERT="roberta-classical-chinese-base-char",Danku=False). With Danku=True, the pipeline tries to segment sentences automatically. Available BERT options are:

- BERT="roberta-classical-chinese-base-char" utilizes roberta-classical-chinese-base-char (default)
- BERT="roberta-classical-chinese-large-char" utilizes roberta-classical-chinese-large-char
- BERT="guwenbert-base" utilizes GuwenBERT-base
- BERT="guwenbert-large" utilizes GuwenBERT-large
- BERT="sikubert" utilizes SikuBERT
- BERT="sikuroberta" utilizes SikuRoBERTa
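For illustration, loading the pipeline with one of the non-default models listed above might look like the following sketch. guwenbert-base is taken from the list; the import is guarded because suparkanbun may not be installed in every environment:

```python
# Sketch: select a non-default BERT model and enable automatic sentence
# segmentation (Danku=True), per the options described above.
try:
    import suparkanbun
except ImportError:  # suparkanbun may not be available here
    suparkanbun = None

if suparkanbun is not None:
    nlp = suparkanbun.load(BERT="guwenbert-base", Danku=True)
    doc = nlp("不入虎穴不得虎子")
    for token in doc:  # doc is an ordinary spacy.tokens.doc.Doc
        print(token.i + 1, token.text, token.pos_, token.dep_)
```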
Installation for Linux:

pip3 install suparkanbun --user
Installation for Cygwin64: make sure to get the python37-devel, python37-pip, python37-cython, python37-numpy, python37-wheel, gcc-g++, mingw64-x86_64-gcc-g++, git, curl, make, and cmake packages, and then:
curl -L https://raw.githubusercontent.com/KoichiYasuoka/CygTorch/master/installer/supar.sh | sh
pip3.7 install suparkanbun
Installation for Google Colaboratory:

!pip install suparkanbun

Try the notebook for Google Colaboratory.
Author: Koichi Yasuoka (安岡孝一)
Reference: Koichi Yasuoka, Christian Wittern, Tomohiko Morioka, Takumi Ikeda, Naoki Yamazaki, Yoshihiro Nikaido, Shingo Suzuki, Shigeki Moro, Kazunori Fujita: Designing Universal Dependencies for Classical Chinese and Its Application, Journal of Information Processing Society of Japan, Vol.63, No.2 (February 2022), pp.355-363.