# Bangla Unicode Normalization for word normalization

## Installation

```bash
pip install bnunicodenormalizer
```

## Initialization and cleaning
```python
# import
from bnunicodenormalizer import Normalizer
from pprint import pprint
# initialize
bnorm = Normalizer()
# normalize
word = 'āĻžāĻā§āĻŦāĻžāĻā§'
result = bnorm(word)
print(f"Non-norm:{word}; Norm:{result['normalized']}")
print("--------------------------------------------------")
pprint(result)
```
Output:

```
Non-norm:āĻžāĻā§āĻŦāĻžāĻā§; Norm:āĻā§āĻŦāĻžāĻā§
--------------------------------------------------
{'given': 'āĻžāĻā§āĻŦāĻžāĻā§',
 'normalized': 'āĻā§āĻŦāĻžāĻā§',
 'ops': [{'after': 'āĻā§āĻŦāĻžāĻā§',
          'before': 'āĻžāĻā§āĻŦāĻžāĻā§',
          'operation': 'InvalidUnicode'}]}
```
A call to the normalizer returns a dictionary in the following format (a short inspection sketch follows the list):

- `given` = the provided text
- `normalized` = the normalized text (`None` if the length of the text becomes 0 during the operations)
- `ops` = the list of operations (dictionaries) that were executed on the given text to create the normalized text; each dictionary in `ops` has:
  - `operation`: the name of the operation / problem in the given text
  - `before`: what the text looked like before the specific operation
  - `after`: what the text looks like after the specific operation
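For instance, a minimal sketch of walking this structure (only the fields documented above are used):

```python
from bnunicodenormalizer import Normalizer

bnorm = Normalizer()
result = bnorm('āĻžāĻā§āĻŦāĻžāĻā§')  # the word from the example above

# 'normalized' is None when nothing valid is left after cleaning
if result['normalized'] is not None:
    for op in result['ops']:
        print(op['operation'], ':', op['before'], '->', op['after'])
```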
## Allowing English text
```python
# initialize without english (default)
norm = Normalizer()
print("without english:", norm("ASD123")["normalized"])
# --> returns None
# initialize with english allowed
norm = Normalizer(allow_english=True)
print("with english:", norm("ASD123")["normalized"])
```
Output:

```
without english: None
with english: ASD123
```
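Because the normalizer operates on single words, longer text has to be normalized word by word. A minimal sketch, assuming whitespace splitting is adequate (real corpora may need a proper tokenizer):

```python
from bnunicodenormalizer import Normalizer

norm = Normalizer(allow_english=True)

def normalize_sentence(sentence):
    # normalize each whitespace-separated word and drop words
    # that normalize to None (nothing valid was left)
    normed = (norm(w)['normalized'] for w in sentence.split())
    return ' '.join(w for w in normed if w is not None)

print(normalize_sentence('āĻžāĻā§āĻŦāĻžāĻā§ ASD123'))
```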
The full set of initialization options, as documented in the class docstring:

```python
'''
initialize a normalizer

args:
    allow_english       : allow english letters, numbers and punctuations [default: False]

    keep_legacy_symbols : legacy symbols will be considered as valid unicodes [default: False]
                          'ā§ē' : Isshar
                          'ā§ģ' : Ganda
                          'āĻ' : Anji (not 'ā§')
                          'āĻ' : li
                          'ā§Ą' : dirgho li
                          'āĻŊ' : Avagraha
                          'ā§ ' : Vocalic Rr (not 'āĻ')
                          'ā§˛' : rupi
                          'ā§´' : currency numerator 1
                          'ā§ĩ' : currency numerator 2
                          'ā§ļ' : currency numerator 3
                          'ā§ˇ' : currency numerator 4
                          'ā§¸' : currency numerator one less than the denominator
                          'ā§š' : currency denominator sixteen

    legacy_maps         : a dictionary for changing legacy symbols into more commonly used unicodes.
                          a default legacy map is included in the language class as well:

                          legacy_maps = {'āĻ': 'ā§',
                                         'āĻ': 'ā§¯',
                                         'ā§Ą': 'ā§¯',
                                         'ā§ĩ': 'ā§¯',
                                         'ā§ģ': 'ā§',
                                         'ā§ ': 'āĻ',
                                         'āĻŊ': 'āĻ'}

                          pass:
                          * legacy_maps=None      : keep the legacy symbols as they are
                          * legacy_maps="default" : use the default legacy map
                          * legacy_maps=<dict>    : a custom dictionary that maps your desired legacy
                                                    symbols to any symbols you want
                            * the keys in the custom dict must belong to the legacy symbols
                            * the values in the custom dict must belong to either vowels, consonants,
                              numbers or diacritics:

                              vowels = ['āĻ
', 'āĻ', 'āĻ', 'āĻ', 'āĻ', 'āĻ', 'āĻ', 'āĻ', 'āĻ', 'āĻ', 'āĻ']
                              consonants = ['āĻ', 'āĻ', 'āĻ', 'āĻ', 'āĻ', 'āĻ', 'āĻ', 'āĻ', 'āĻ', 'āĻ',
                                            'āĻ', 'āĻ ', 'āĻĄ', 'āĻĸ', 'āĻŖ', 'āĻ¤', 'āĻĨ', 'āĻĻ', 'āĻ§', 'āĻ¨',
                                            'āĻĒ', 'āĻĢ', 'āĻŦ', 'āĻ', 'āĻŽ', 'āĻ¯', 'āĻ°', 'āĻ˛', 'āĻļ', 'āĻˇ',
                                            'āĻ¸', 'āĻš', 'ā§', 'ā§', 'ā§', 'ā§']
                              numbers = ['ā§Ļ', 'ā§§', 'ā§¨', 'ā§Š', 'ā§Ē', 'ā§Ģ', 'ā§Ŧ', 'ā§', 'ā§Ž', 'ā§¯']
                              vowel_diacritics = ['āĻž', 'āĻŋ', 'ā§', 'ā§', 'ā§', 'ā§', 'ā§', 'ā§', 'ā§', 'ā§']
                              consonant_diacritics = ['āĻ', 'āĻ', 'āĻ']

                          > for example, you may want to map 'āĻŊ' (Avagraha) to 'āĻš' based on visual
                            similarity (default: 'āĻ')

legacy conditions: keep_legacy_symbols and legacy_maps operate as follows

    case-1) keep_legacy_symbols=True and legacy_maps=None
            : all legacy symbols will be considered valid unicodes. None of them will be changed
    case-2) keep_legacy_symbols=True and legacy_maps=valid dictionary, example: {'āĻ':'āĻ'}
            : all legacy symbols will be considered valid unicodes. Only 'āĻ' will be changed to 'āĻ'; others will be untouched
    case-3) keep_legacy_symbols=False and legacy_maps=None
            : all legacy symbols will be removed
    case-4) keep_legacy_symbols=False and legacy_maps=valid dictionary, example: {'āĻŊ':'āĻ','ā§ ':'āĻ'}
            : 'āĻŊ' will be changed to 'āĻ' and 'ā§ ' will be changed to 'āĻ'. All other legacy symbols will be removed
'''
```
```python
my_legacy_maps = {'āĻ': 'āĻ',
                  'ā§Ą': 'āĻ',
                  'ā§ĩ': 'āĻ',
                  'ā§ ': 'āĻ',
                  'āĻŊ': 'āĻ'}
text = "ā§ē,ā§ģ,āĻ,āĻ,ā§Ą,āĻŊ,ā§ ,ā§˛,ā§´,ā§ĩ,ā§ļ,ā§ˇ,ā§¸,ā§š"
# case 1
norm = Normalizer(keep_legacy_symbols=True, legacy_maps=None)
print("case-1 normalized text: ", norm(text)["normalized"])
# case 2
norm = Normalizer(keep_legacy_symbols=True, legacy_maps=my_legacy_maps)
print("case-2 normalized text: ", norm(text)["normalized"])
# case 2-default
norm = Normalizer(keep_legacy_symbols=True)
print("case-2 default normalized text: ", norm(text)["normalized"])
# case 3
norm = Normalizer(keep_legacy_symbols=False, legacy_maps=None)
print("case-3 normalized text: ", norm(text)["normalized"])
# case 4
norm = Normalizer(keep_legacy_symbols=False, legacy_maps=my_legacy_maps)
print("case-4 normalized text: ", norm(text)["normalized"])
# case 4-default
norm = Normalizer(keep_legacy_symbols=False)
print("case-4 default normalized text: ", norm(text)["normalized"])
```
Output:

```
case-1 normalized text:  ā§ē,ā§ģ,āĻ,āĻ,ā§Ą,āĻŊ,ā§ ,ā§˛,ā§´,ā§ĩ,ā§ļ,ā§ˇ,ā§¸,ā§š
case-2 normalized text:  ā§ē,ā§ģ,āĻ,āĻ,āĻ,āĻ,āĻ,ā§˛,ā§´,āĻ,ā§ļ,ā§ˇ,ā§¸,ā§š
case-2 default normalized text:  ā§ē,ā§ģ,āĻ,āĻ,ā§Ą,āĻŊ,ā§ ,ā§˛,ā§´,ā§ĩ,ā§ļ,ā§ˇ,ā§¸,ā§š
case-3 normalized text:  ,,,,,,,,,,,,,
case-4 normalized text:  ,,,āĻ,āĻ,āĻ,āĻ,,,āĻ,,,,
case-4 default normalized text:  ,,,,,,,,,,,,,
```
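If you build your own map, the docstring above constrains it: keys must be legacy symbols, and values must come from the vowel/consonant/number/diacritic lists. A hypothetical pre-flight check for the key constraint (the `check_legacy_map` helper is not part of the library):

```python
# hypothetical pre-flight check for a custom legacy_maps dict:
# keys must come from the legacy symbols; values must come from the
# vowel/consonant/number/diacritic lists (only keys are checked here)
legacy_symbols = "ā§ē,ā§ģ,āĻ,āĻ,ā§Ą,āĻŊ,ā§ ,ā§˛,ā§´,ā§ĩ,ā§ļ,ā§ˇ,ā§¸,ā§š".split(',')

def check_legacy_map(mapping):
    for key in mapping:
        assert key in legacy_symbols, f"{key!r} is not a legacy symbol"

check_legacy_map({'āĻŊ': 'āĻš'})  # e.g. mapping Avagraha to a visually similar consonant
```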
- Base operations available for all Indic languages:

```python
self.word_level_ops = {"LegacySymbols"    : self.mapLegacySymbols,
                       "BrokenDiacritics" : self.fixBrokenDiacritics}

self.decomp_level_ops = {"BrokenNukta"              : self.fixBrokenNukta,
                         "InvalidUnicode"           : self.cleanInvalidUnicodes,
                         "InvalidConnector"         : self.cleanInvalidConnector,
                         "FixDiacritics"            : self.cleanDiacritics,
                         "VowelDiacriticAfterVowel" : self.cleanVowelDiacriticComingAfterVowel}
```

- Extensions for Bangla:

```python
self.decomp_level_ops["ToAndHosontoNormalize"] = self.normalizeToandHosonto
# invalid folas
self.decomp_level_ops["NormalizeConjunctsDiacritics"] = self.cleanInvalidConjunctDiacritics
# complex root cleanup
self.decomp_level_ops["ComplexRootNormalization"] = self.convertComplexRoots
```
In all examples below, (a) is the non-normalized form and (b) is the normalized form. A sketch for reproducing these checks follows the examples.
- Broken diacritics:

```
# Example-1:
(a) 'āĻāĻ°ā§āĻž' == (b) 'āĻāĻ°ā§' --> False
(a) breaks as: ['āĻ', 'āĻ°', 'ā§', 'āĻž']
(b) breaks as: ['āĻ', 'āĻ°', 'ā§']
# Example-2:
(a) 'āĻĒā§ā§āĻāĻā§' == (b) 'āĻĒā§āĻāĻā§' --> False
(a) breaks as: ['āĻĒ', 'ā§', 'ā§', 'āĻ', 'āĻ', 'ā§']
(b) breaks as: ['āĻĒ', 'ā§', 'āĻ', 'āĻ', 'ā§']
# Example-3:
(a) 'āĻ¸āĻāĻ¸ā§āĻā§āĻ¤āĻŋ' == (b) 'āĻ¸āĻāĻ¸ā§āĻā§āĻ¤āĻŋ' --> False
(a) breaks as: ['āĻ¸', 'āĻ', 'āĻ¸', 'ā§', 'āĻ', 'ā§', 'āĻ¤', 'āĻŋ']
(b) breaks as: ['āĻ¸', 'āĻ', 'āĻ¸', 'ā§', 'āĻ', 'ā§', 'āĻ¤', 'āĻŋ']
```
- Nukta normalization:

```
# Example-1:
(a) 'āĻā§āĻ¨ā§āĻĻā§āĻ°ā§āĻ¯āĻŧ' == (b) 'āĻā§āĻ¨ā§āĻĻā§āĻ°ā§ā§' --> False
(a) breaks as: ['āĻ', 'ā§', 'āĻ¨', 'ā§', 'āĻĻ', 'ā§', 'āĻ°', 'ā§', 'āĻ¯', 'āĻŧ']
(b) breaks as: ['āĻ', 'ā§', 'āĻ¨', 'ā§', 'āĻĻ', 'ā§', 'āĻ°', 'ā§', 'ā§']
# Example-2:
(a) 'āĻ°āĻ¯ā§āĻŧāĻā§' == (b) 'āĻ°ā§ā§āĻā§' --> False
(a) breaks as: ['āĻ°', 'āĻ¯', 'ā§', 'āĻŧ', 'āĻ', 'ā§']
(b) breaks as: ['āĻ°', 'ā§', 'ā§', 'āĻ', 'ā§']
# Example-3:
(a) 'āĻāĻŧāĻ¨ā§āĻ¯' == (b) 'āĻāĻ¨ā§āĻ¯' --> False
(a) breaks as: ['āĻ', 'āĻŧ', 'āĻ¨', 'ā§', 'āĻ¯']
(b) breaks as: ['āĻ', 'āĻ¨', 'ā§', 'āĻ¯']
```
- Invalid hosonto:

```
# Example-1:
(a) 'āĻĻā§āĻā§āĻāĻŋ' == (b) 'āĻĻā§āĻāĻāĻŋ' --> False
(a) breaks as: ['āĻĻ', 'ā§', 'āĻ', 'ā§', 'āĻ', 'āĻŋ']
(b) breaks as: ['āĻĻ', 'ā§', 'āĻ', 'āĻ', 'āĻŋ']
# Example-2:
(a) 'āĻā§āĻ¤ā§' == (b) 'āĻāĻ¤ā§' --> False
(a) breaks as: ['āĻ', 'ā§', 'āĻ¤', 'ā§']
(b) breaks as: ['āĻ', 'āĻ¤', 'ā§']
# Example-3:
(a) 'āĻ¨ā§āĻā§āĻā§āĻžāĻ°ā§āĻ' == (b) 'āĻ¨ā§āĻāĻā§āĻžāĻ°ā§āĻ' --> False
(a) breaks as: ['āĻ¨', 'ā§', 'āĻ', 'ā§', 'āĻ', 'ā§', 'āĻž', 'āĻ°', 'ā§', 'āĻ']
(b) breaks as: ['āĻ¨', 'ā§', 'āĻ', 'āĻ', 'ā§', 'āĻž', 'āĻ°', 'ā§', 'āĻ']
# Example-4:
(a) 'āĻāĻ¸ā§āĻāĻ' == (b) 'āĻāĻ¸āĻāĻ' --> False
(a) breaks as: ['āĻ', 'āĻ¸', 'ā§', 'āĻ', 'āĻ']
(b) breaks as: ['āĻ', 'āĻ¸', 'āĻ', 'āĻ']
# Example-5:
(a) 'āĻā§ā§āĻā§āĻ¤āĻŋ' == (b) 'āĻā§āĻā§āĻ¤āĻŋ' --> False
(a) breaks as: ['āĻ', 'ā§', 'ā§', 'āĻ', 'ā§', 'āĻ¤', 'āĻŋ']
(b) breaks as: ['āĻ', 'ā§', 'āĻ', 'ā§', 'āĻ¤', 'āĻŋ']
# Example-6:
(a) 'āĻ¯ā§ā§āĻā§āĻ¤' == (b) 'āĻ¯ā§āĻā§āĻ¤' --> False
(a) breaks as: ['āĻ¯', 'ā§', 'ā§', 'āĻ', 'ā§', 'āĻ¤']
(b) breaks as: ['āĻ¯', 'ā§', 'āĻ', 'ā§', 'āĻ¤']
# Example-7:
(a) 'āĻāĻŋāĻā§ā§āĻ' == (b) 'āĻāĻŋāĻā§āĻ' --> False
(a) breaks as: ['āĻ', 'āĻŋ', 'āĻ', 'ā§', 'ā§', 'āĻ']
(b) breaks as: ['āĻ', 'āĻŋ', 'āĻ', 'ā§', 'āĻ']
```
- To + hosonto:

```
# Example-1:
(a) 'āĻŦā§āĻ¤ā§āĻĒāĻ¤ā§āĻ¤āĻŋ' == (b) 'āĻŦā§ā§āĻĒāĻ¤ā§āĻ¤āĻŋ' --> False
(a) breaks as: ['āĻŦ', 'ā§', 'āĻ¤', 'ā§', 'āĻĒ', 'āĻ¤', 'ā§', 'āĻ¤', 'āĻŋ']
(b) breaks as: ['āĻŦ', 'ā§', 'ā§', 'āĻĒ', 'āĻ¤', 'ā§', 'āĻ¤', 'āĻŋ']
# Example-2:
(a) 'āĻāĻ¤ā§āĻ¸' == (b) 'āĻā§āĻ¸' --> False
(a) breaks as: ['āĻ', 'āĻ¤', 'ā§', 'āĻ¸']
(b) breaks as: ['āĻ', 'ā§', 'āĻ¸']
```
- Unwanted doubles (consecutive doubles):

```
# Example-1:
(a) 'āĻ¯ā§ā§āĻĻā§āĻ§' == (b) 'āĻ¯ā§āĻĻā§āĻ§' --> False
(a) breaks as: ['āĻ¯', 'ā§', 'ā§', 'āĻĻ', 'ā§', 'āĻ§']
(b) breaks as: ['āĻ¯', 'ā§', 'āĻĻ', 'ā§', 'āĻ§']
# Example-2:
(a) 'āĻĻā§ā§āĻ' == (b) 'āĻĻā§āĻ' --> False
(a) breaks as: ['āĻĻ', 'ā§', 'ā§', 'āĻ']
(b) breaks as: ['āĻĻ', 'ā§', 'āĻ']
# Example-3:
(a) 'āĻĒā§āĻ°āĻā§ā§āĻ¤āĻŋāĻ°' == (b) 'āĻĒā§āĻ°āĻā§āĻ¤āĻŋāĻ°' --> False
(a) breaks as: ['āĻĒ', 'ā§', 'āĻ°', 'āĻ', 'ā§', 'ā§', 'āĻ¤', 'āĻŋ', 'āĻ°']
(b) breaks as: ['āĻĒ', 'ā§', 'āĻ°', 'āĻ', 'ā§', 'āĻ¤', 'āĻŋ', 'āĻ°']
# Example-4:
(a) 'āĻāĻŽāĻžāĻā§āĻžāĻž' == (b) 'āĻāĻŽāĻžāĻā§' --> False
(a) breaks as: ['āĻ', 'āĻŽ', 'āĻž', 'āĻ', 'ā§', 'āĻž', 'āĻž']
(b) breaks as: ['āĻ', 'āĻŽ', 'āĻž', 'āĻ', 'ā§']
```
- Vowels and modifiers followed by vowel diacritics:

```
# Example-1:
(a) 'āĻā§āĻ˛ā§' == (b) 'āĻāĻ˛ā§' --> False
(a) breaks as: ['āĻ', 'ā§', 'āĻ˛', 'ā§']
(b) breaks as: ['āĻ', 'āĻ˛', 'ā§']
# Example-2:
(a) 'āĻāĻ°ā§āĻāĻŋāĻā§āĻ˛āĻāĻŋ' == (b) 'āĻāĻ°ā§āĻāĻŋāĻāĻ˛āĻāĻŋ' --> False
(a) breaks as: ['āĻ', 'āĻ°', 'ā§', 'āĻ', 'āĻŋ', 'āĻ', 'ā§', 'āĻ˛', 'āĻ', 'āĻŋ']
(b) breaks as: ['āĻ', 'āĻ°', 'ā§', 'āĻ', 'āĻŋ', 'āĻ', 'āĻ˛', 'āĻ', 'āĻŋ']
# Example-3:
(a) 'āĻāĻāĻā§' == (b) 'āĻāĻāĻ¤ā§āĻ°ā§' --> False
(a) breaks as: ['āĻ', 'āĻ', 'āĻ', 'ā§']
(b) breaks as: ['āĻ', 'āĻ', 'āĻ¤', 'ā§', 'āĻ°', 'ā§']
```
- Repeated folas:

```
# Example-1:
(a) 'āĻā§āĻ°ā§āĻ°āĻžāĻŽāĻā§' == (b) 'āĻā§āĻ°āĻžāĻŽāĻā§' --> False
(a) breaks as: ['āĻ', 'ā§', 'āĻ°', 'ā§', 'āĻ°', 'āĻž', 'āĻŽ', 'āĻ', 'ā§']
(b) breaks as: ['āĻ', 'ā§', 'āĻ°', 'āĻž', 'āĻŽ', 'āĻ', 'ā§']
```
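The `breaks as` views above are simply the words' Unicode codepoint sequences, so any example can be reproduced directly. A sketch using Example-1 of "Broken diacritics" (the final check is expected to hold, since (b) is documented as the normalized form of (a)):

```python
from bnunicodenormalizer import Normalizer

bnorm = Normalizer()
a, b = 'āĻāĻ°ā§āĻž', 'āĻāĻ°ā§'   # Example-1 of "Broken diacritics"
print(a == b)            # False: the codepoint sequences differ
print(list(a))           # ['āĻ', 'āĻ°', 'ā§', 'āĻž']
print(list(b))           # ['āĻ', 'āĻ°', 'ā§']
print(bnorm(a)['normalized'] == b)  # expected True: (a) normalizes to (b)
```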
The normalization is purely based on how Bangla text is used in Bangladesh (bn:bd). It does not necessarily cover every variation of textual content found in other regions.
To run the unit tests:

- clone the repository
- change the working directory to `tests`
- run: `python3 -m unittest test_normalizer.py`
For reporting an issue, please provide the following specific information:

- the invalid text
- the expected valid text
- why the output is expected
To add a test case:

- clone the repository
- add a test case in `tests/test_normalizer.py` after line no: 91

```python
# Dummy Non-Bangla, Numbers and Space cases / Invalid start-end cases
# english
self.assertEqual(norm('ASD1234')["normalized"], None)
self.assertEqual(ennorm('ASD1234')["normalized"], 'ASD1234')
# random
self.assertEqual(norm('āĻŋāĻ¤')["normalized"], 'āĻ¤')
self.assertEqual(norm('āĻ¸āĻā§āĻ¯ā§āĻā§āĻ¤āĻŋ')["normalized"], "āĻ¸āĻāĻ¯ā§āĻā§āĻ¤āĻŋ")
# Ending
self.assertEqual(norm("āĻ āĻāĻžāĻ¨āĻžā§")["normalized"], "āĻ āĻāĻžāĻ¨āĻž")
#--------------------------------------------- insert your assertions here----------------------------------------
'''
### case: give a comment about your case
## (a) invalid text == (b) valid text   <---- an example of your case
self.assertEqual(norm(invalid text)["normalized"], expected output)
or
self.assertEqual(ennorm(invalid text)["normalized"], expected output)   <----- for including english text
'''
# your case goes here-
```

- perform the unit testing
- make sure the unit test fails under true conditions
To use the Indic language normalizer for 'devanagari', 'gujarati', 'odiya', 'tamil', 'panjabi', 'malayalam', 'sylhetinagri':

```python
from bnunicodenormalizer import IndicNormalizer
norm = IndicNormalizer('devanagari')
```

Initialization:

```python
'''
initialize a normalizer

args:
    language      : language identifier from 'devanagari', 'gujarati', 'odiya',
                    'tamil', 'panjabi', 'malayalam', 'sylhetinagri'
    allow_english : allow english letters, numbers and punctuations [default: False]
'''
```
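A minimal usage sketch; the Devanagari word is an arbitrary example, and the result is assumed to follow the same dictionary format as the Bangla normalizer:

```python
from bnunicodenormalizer import IndicNormalizer

norm = IndicNormalizer('devanagari')
result = norm('नमस्ते')   # arbitrary Devanagari word
print(result['normalized'])  # assumed: same 'given'/'normalized'/'ops' format
```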
- Authors: Bengali.AI in association with the OCR Team, APSIS Solutions Limited
- Cite Our Work:

```bibtex
@inproceedings{ansary-etal-2024-unicode-normalization,
    title = "{U}nicode Normalization and Grapheme Parsing of {I}ndic Languages",
    author = "Ansary, Nazmuddoha and
      Adib, Quazi Adibur Rahman and
      Reasat, Tahsin and
      Sushmit, Asif Shahriyar and
      Humayun, Ahmed Imtiaz and
      Mehnaz, Sazia and
      Fatema, Kanij and
      Rashid, Mohammad Mamun Or and
      Sadeque, Farig",
    editor = "Calzolari, Nicoletta and
      Kan, Min-Yen and
      Hoste, Veronique and
      Lenci, Alessandro and
      Sakti, Sakriani and
      Xue, Nianwen",
    booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
    month = may,
    year = "2024",
    address = "Torino, Italia",
    publisher = "ELRA and ICCL",
    url = "https://aclanthology.org/2024.lrec-main.1479",
    pages = "17019--17030",
    abstract = "Writing systems of Indic languages have orthographic syllables, also known as complex graphemes, as unique horizontal units. A prominent feature of these languages is these complex grapheme units that comprise consonants/consonant conjuncts, vowel diacritics, and consonant diacritics, which, together make a unique Language. Unicode-based writing schemes of these languages often disregard this feature of these languages and encode words as linear sequences of Unicode characters using an intricate scheme of connector characters and font interpreters. Due to this way of using a few dozen Unicode glyphs to write thousands of different unique glyphs (complex graphemes), there are serious ambiguities that lead to malformed words. In this paper, we are proposing two libraries: i) a normalizer for normalizing inconsistencies caused by a Unicode-based encoding scheme for Indic languages and ii) a grapheme parser for Abugida text. It deconstructs words into visually distinct orthographic syllables or complex graphemes and their constituents. Our proposed normalizer is a more efficient and effective tool than the previously used IndicNLP normalizer. Moreover, our parser and normalizer are also suitable tools for general Abugida text processing as they performed well in our robust word-based and NLP experiments. We report the pipeline for the scripts of 7 languages in this work and develop the framework for the integration of more scripts.",
}
```