Digitala Vetenskapliga Arkivet

1 - 33 of 33
  • 1.
    Abdulmumin, Idris
    et al.
    Ahmadu Bello University, Zaria, Nigeria; HausaNLP.
    Beukman, Michael
    University of the Witwatersrand, South Africa.
    Alabi, Jesujoba O.
    Saarland University, Germany.
    Emezue, Chris
    TUM, Germany; Mila - Quebec AI Institute.
    Asiko, Everlyn
    University of Cape Town, South Africa; African Institute for Mathematical Sciences.
    Adewumi, Oluwatosin
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Muhammad, Shamsuddeen Hassan
    HausaNLP; LIAAD-INESC TEC, Porto, Portugal.
    Adeyemi, Mofetoluwa
    Uppsala University, Sweden.
    Yousuf, Oreen
    Uppsala University, Sweden.
    Singh, Sahib
    Ford Motor Company.
    Gwadabe, Tajuddeen Rabiu
    HausaNLP; University of Chinese Academy of Sciences, China.
    Separating Grains from the Chaff: Using Data Filtering to Improve Multilingual Translation for Low-Resourced African Languages. 2022. In: Proceedings of the Seventh Conference on Machine Translation (WMT) / [ed] Philipp Koehn, Loïc Barrault, Ondřej Bojar, Fethi Bougares, Rajen Chatterjee, Marta R. Costa-jussà, Christian Federmann, Mark Fishel, Alexander Fraser, Markus Freitag, Yvette Graham, Roman Grundkiewicz, Paco Guzman, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Tom Kocmi, André Martins, Makoto Morishita, Christof Monz, Masaaki Nagata, Toshiaki Nakazawa, Matteo Negri, Aurélie Névéol, Mariana Neves, Martin Popel, Marco Turchi, Marcos Zampieri, Association for Computational Linguistics, 2022, p. 1001-1014. Conference paper (Refereed)
    Abstract [en]

    We participated in the WMT 2022 Large-Scale Machine Translation Evaluation for the African Languages Shared Task. This work describes our approach, which is based on filtering the given noisy data using a sentence-pair classifier that was built by fine-tuning a pre-trained language model. To train the classifier, we obtain positive samples (i.e. high-quality parallel sentences) from a gold-standard curated dataset and extract negative samples (i.e. low-quality parallel sentences) from automatically aligned parallel data by choosing sentences with low alignment scores. Our final machine translation model was then trained on the filtered data, instead of the entire noisy dataset. We empirically validate our approach by evaluating on two common datasets and show that data filtering generally improves overall translation quality, in some cases even significantly.
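    A minimal sketch of the filtering step described above: a fine-tuned sentence-pair classifier scores each candidate pair and only pairs above a threshold are kept. The checkpoint name, label convention, and threshold are assumptions for illustration, not the authors' exact setup.

    ```python
    # Hedged sketch: filter noisy parallel data with a sentence-pair classifier.
    # "pair-classifier" is a hypothetical checkpoint; label index 1 meaning
    # "high-quality pair" is an assumed convention.
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("pair-classifier")
    model = AutoModelForSequenceClassification.from_pretrained("pair-classifier")
    model.eval()

    def keep_pair(src: str, tgt: str, threshold: float = 0.5) -> bool:
        # Encode the two sentences jointly, as in standard pair classification.
        inputs = tokenizer(src, tgt, return_tensors="pt", truncation=True)
        with torch.no_grad():
            prob_good = torch.softmax(model(**inputs).logits, dim=-1)[0, 1].item()
        return prob_good >= threshold

    noisy_pairs = [("Ina son ka.", "I like you."), ("...", "unrelated text")]
    filtered = [(s, t) for s, t in noisy_pairs if keep_pair(s, t)]
    ```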

  • 2.
    Abid, Nosheen
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Noman, Md Kislu
    Centre for AI and ML, School of Science, Edith Cowan University, Joondalup, WA, Australia.
    Kovács, György
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Islam, Syed Mohammed Shamsul
    Centre for AI and ML, School of Science, Edith Cowan University, Joondalup, WA, Australia.
    Adewumi, Tosin
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab. EISLAB Machine Learning, Luleå University of Technology, Luleå, Sweden.
    Lavery, Paul
    Centre for Marine Ecosystems Research, School of Sciences, Edith Cowan University, Joondalup, WA, Australia; Centro de Estudios Avanzados de Blanes, Consejo Superior de Investigaciones Científicas, Blanes, Spain.
    Shafait, Faisal
    Deep Learning Lab, National Center of Artificial Intelligence, National University of Sciences and Technology, Islamabad, Pakistan.
    Liwicki, Marcus
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Seagrass classification using unsupervised curriculum learning (UCL). 2024. In: Ecological Informatics, ISSN 1574-9541, E-ISSN 1878-0512, Vol. 83, article id 102804. Article in journal (Refereed)
    Abstract [en]

    Seagrass ecosystems are pivotal in marine environments, serving as crucial habitats for diverse marine species and contributing significantly to carbon sequestration. Accurate classification of seagrass species from underwater images is imperative for monitoring and preserving these ecosystems. This paper introduces Unsupervised Curriculum Learning (UCL) for seagrass classification using the DeepSeagrass dataset. UCL progressively learns from simpler to more complex examples, enhancing the model's ability to discern seagrass features in a curriculum-driven manner. Experiments employing state-of-the-art deep learning architectures, convolutional neural networks (CNNs), show that UCL achieved overall 90.12% precision and 89% recall, which significantly improves classification accuracy and robustness, outperforming some traditional supervised learning approaches like SimCLR, and unsupervised approaches like Zero-shot CLIP. The methodology of UCL involves four main steps: high-dimensional feature extraction, pseudo-label generation through clustering, reliable sample selection, and fine-tuning the model. The iterative UCL framework refines the CNN's learning of underwater images, demonstrating superior accuracy, generalization, and adaptability to unseen seagrass and background samples of undersea images. The findings presented in this paper contribute to the advancement of seagrass classification techniques, providing valuable insights into the conservation and management of marine ecosystems. The code and dataset are made publicly available and can be accessed here: https://github.com/nabid69/Unsupervised-Curriculum-Learning—UCL.
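    The four UCL steps lend themselves to a compact sketch. The following is a minimal illustration under assumptions: the placeholder `extract_features` stands in for the CNN backbone, and the cluster count and "reliable" fraction are illustrative values, not the paper's settings.

    ```python
    # Hedged sketch of one UCL round: features -> clustering pseudo-labels ->
    # reliable-sample selection; fine-tuning on the selected samples follows.
    import numpy as np
    from sklearn.cluster import KMeans

    def ucl_round(images, extract_features, n_classes: int, reliable_frac: float = 0.2):
        feats = np.stack([extract_features(img) for img in images])   # 1) features
        km = KMeans(n_clusters=n_classes, n_init=10).fit(feats)       # 2) pseudo-labels
        # Distance of each sample to its assigned cluster centre:
        dists = km.transform(feats)[np.arange(len(feats)), km.labels_]
        cutoff = np.quantile(dists, reliable_frac)                    # 3) reliable samples
        reliable_idx = np.where(dists <= cutoff)[0]
        return km.labels_, reliable_idx                               # 4) fine-tune on these
    ```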

     

  • 3.
    Adelani, David Ifeoluwa
    et al.
    Spoken Language Systems Group (LSV), Saarland University, Germany; Masakhane NLP.
    Abbott, Jade
    Retro Rabbit, South Africa; Masakhane NLP.
    Neubig, Graham
    Language Technologies Institute, Carnegie Mellon University, United States.
    D'souza, Daniel
    ProQuest, United States; Masakhane NLP.
    Kreutzer, Julia
    Google Research, Canada; Masakhane NLP.
    Lignos, Constantine
    Brandeis University, United States; Masakhane NLP.
    Palen-Michel, Chester
    Brandeis University, United States; Masakhane NLP.
    Buzaaba, Happy
    Graduate School of Systems and Information Engineering, University of Tsukuba, Japan; Masakhane NLP.
    Rijhwani, Shruti
    Language Technologies Institute, Carnegie Mellon University, United States.
    Ruder, Sebastian
    DeepMind, United Kingdom.
    Mayhew, Stephen
    Duolingo, United States.
    Abebe Azime, Israel
    African Institute for Mathematical Sciences (AIMS-AMMI), Ethiopia; Masakhane NLP.
    Muhammad, Shamsuddeen H.
    University of Porto, Portugal; Bayero University, Kano, Nigeria.
    Emezue, Chris Chinenye
    Technical University of Munich, Germany; Masakhane NLP.
    Nakatumba-Nabende, Joyce
    Makerere University, Kampala, Uganda; Masakhane NLP.
    Ogayo, Perez
    African Leadership University, Rwanda; Masakhane NLP.
    Anuoluwapo, Aremu
    University of Lagos, Nigeria; Masakhane NLP.
    Gitau, Catherine
    Masakhane NLP.
    Mbaye, Derguene
    Masakhane NLP.
    Alabi, Jesujoba
    Max Planck Institute for Informatics, Germany; Masakhane NLP.
    Yimam, Seid Muhie
    LT Group, Universität Hamburg, Germany.
    Gwadabe, Tajuddeen Rabiu
    University of Chinese Academy of Sciences, China; Masakhane NLP.
    Ezeani, Ignatius
    Lancaster University, United Kingdom; Masakhane NLP.
    Niyongabo, Rubungo Andre
    University of Electronic Science and Technology of China, China; Masakhane NLP.
    Mukiibi, Jonathan
    Makerere University, Kampala, Uganda.
    Otiende, Verrah
    United States International University - Africa (USIU-A), Kenya; Masakhane NLP.
    Orife, Iroro
    Niger-Volta LTI; Masakhane NLP.
    David, Davis
    Masakhane NLP.
    Ngom, Samba
    Masakhane NLP.
    Adewumi, Tosin
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab. Masakhane NLP.
    Rayson, Paul
    Lancaster University, United Kingdom.
    Adeyemi, Mofetoluwa
    Masakhane NLP.
    Muriuki, Gerald
    Makerere University, Kampala, Uganda.
    Anebi, Emmanuel
    Masakhane NLP.
    Chukwuneke, Chiamaka
    Masakhane NLP.
    Odu, Nkiruka
    African University of Science and Technology, Abuja, Nigeria.
    Wairagala, Eric Peter
    Makerere University, Kampala, Uganda.
    Oyerinde, Samuel
    Masakhane NLP.
    Siro, Clemencia
    Masakhane NLP.
    Bateesa, Tobius Saul
    Makerere University, Kampala, Uganda.
    Oloyede, Temilola
    Masakhane NLP.
    Wambui, Yvonne
    Masakhane NLP.
    Akinode, Victor
    Masakhane NLP.
    Nabagereka, Deborah
    Makerere University, Kampala, Uganda.
    Katusiime, Maurice
    Makerere University, Kampala, Uganda.
    Awokoya, Ayodele
    University of Ibadan, Nigeria; Masakhane NLP.
    Mboup, Mouhamadane
    Masakhane NLP.
    Gebreyohannes, Dibora
    Masakhane NLP.
    Tilaye, Henok
    Masakhane NLP.
    Nwaike, Kelechi
    Masakhane NLP.
    Wolde, Degaga
    Masakhane NLP.
    Faye, Abdoulaye
    Masakhane NLP.
    Sibanda, Blessing
    Namibia University of Science and Technology, Namibia; Masakhane NLP.
    Ahia, Orevaoghene
    Instadeep, Nigeria; Masakhane NLP.
    Dossou, Bonaventure F. P.
    Jacobs University Bremen, Germany; Masakhane NLP.
    Ogueji, Kelechi
    University of Waterloo, Canada; Masakhane NLP.
    Diop, Thierno Ibrahima
    Masakhane NLP.
    Diallo, Abdoulaye
    Masakhane NLP.
    Akinfaderin, Adewale
    Masakhane NLP.
    Marengereke, Tendai
    Masakhane NLP.
    Osei, Salomey
    African Institute for Mathematical Sciences (AIMS-AMMI), Ethiopia; Masakhane NLP.
    MasakhaNER: Named Entity Recognition for African Languages. 2021. In: Transactions of the Association for Computational Linguistics, E-ISSN 2307-387X, Vol. 9, p. 1116-1131. Article in journal (Refereed)
    Abstract [en]

    We take a step towards addressing the under-representation of the African continent in NLP research by bringing together different stakeholders to create the first large, publicly available, high-quality dataset for named entity recognition (NER) in ten African languages. We detail the characteristics of these languages to help researchers and practitioners better understand the challenges they pose for NER tasks. We analyze our datasets and conduct an extensive empirical evaluation of state-of-the-art methods across both supervised and transfer learning settings. Finally, we release the data, code, and models to inspire future research on African NLP.
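    For readers unfamiliar with the task, NER on datasets like this one is typically evaluated with entity-level F1 over BIO-tagged sequences. A small illustration using the seqeval library (an assumption here, not necessarily the authors' tooling):

    ```python
    # Entity-level F1 for BIO-tagged NER output; toy gold/predicted sequences.
    from seqeval.metrics import classification_report, f1_score

    y_true = [["B-PER", "I-PER", "O", "B-LOC", "O"]]
    y_pred = [["B-PER", "I-PER", "O", "O", "O"]]   # the LOC entity was missed

    print(f1_score(y_true, y_pred))            # strict, entity-level F1
    print(classification_report(y_true, y_pred))
    ```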

  • 4.
    Adelani, David Ifeoluwa
    et al.
    Masakhane NLP; Saarland University, Germany; University College London, UK.
    Neubig, Graham
    Carnegie Mellon University, USA.
    Ruder, Sebastian
    Google Research.
    Rijhwani, Shruti
    Carnegie Mellon University, USA.
    Beukman, Michael
    Masakhane NLP; University of the Witwatersrand, South Africa.
    Palen-Michel, Chester
    Masakhane NLP; Brandeis University, USA.
    Lignos, Constantine
    Masakhane NLP; Brandeis University, USA.
    Alabi, Jesujoba O.
    Masakhane NLP; Saarland University, Germany.
    Muhammad, Shamsuddeen H.
    Masakhane NLP; LIAAD-INESC TEC, Portugal.
    Nabende, Peter
    Masakhane NLP; Makerere University, Uganda.
    Bamba Dione, Cheikh M.
    Masakhane NLP; University of Bergen, Norway.
    Bukula, Andiswa
    SADiLaR, South Africa.
    Mabuya, Rooweither
    SADiLaR, South Africa.
    Dossou, Bonaventure F.P.
    Masakhane NLP; Mila Quebec AI Institute, Canada.
    Sibanda, Blessing
    Masakhane NLP.
    Buzaaba, Happy
    Masakhane NLP; RIKEN Center for AI Project, Japan.
    Mukiibi, Jonathan
    Masakhane NLP; Makerere University, Uganda.
    Kalipe, Godson
    Masakhane NLP.
    Mbaye, Derguene
    Masakhane NLP; Baamtu, Senegal.
    Taylor, Amelia
    Masakhane NLP; Malawi University of Business and Applied Science, Malawi.
    Kabore, Fatoumata
    Masakhane NLP; Uppsala University, Sweden.
    Emezue, Chris Chinenye
    Masakhane NLP; TU Munich, Germany.
    Aremu, Anuoluwapo
    Masakhane NLP.
    Ogayo, Perez
    Masakhane NLP; Carnegie Mellon University, USA.
    Gitau, Catherine
    Masakhane NLP.
    Munkoh-Buabeng, Edwin
    Masakhane NLP; TU Clausthal, Germany.
    Koagne, Victoire M.
    Masakhane NLP.
    Tapo, Allahsera Auguste
    Masakhane NLP; Rochester Institute of Technology, USA.
    Macucwa, Tebogo
    Masakhane NLP; University of Pretoria, South Africa.
    Marivate, Vukosi
    Masakhane NLP; University of Pretoria, South Africa.
    Mboning, Elvis
    Masakhane NLP.
    Gwadabe, Tajuddeen
    Masakhane NLP.
    Adewumi, Tosin
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab. Masakhane NLP.
    Ahia, Orevaoghene
    Masakhane NLP; University of Washington, USA.
    Nakatumba-Nabende, Joyce
    Masakhane NLP; Makerere University, Uganda.
    Mokono, Neo L.
    Masakhane NLP; University of Pretoria, South Africa.
    Ezeani, Ignatius
    Masakhane NLP; Lancaster University, UK.
    Chukwuneke, Chiamaka
    Masakhane NLP; Lancaster University, UK.
    Adeyemi, Mofetoluwa
    Masakhane NLP; University of Waterloo, Canada.
    Hacheme, Gilles Q.
    Masakhane NLP; Ai4innov, France.
    Abdulmumin, Idris
    Masakhane NLP; Ahmadu Bello University, Nigeria.
    Ogundepo, Odunayo
    Masakhane NLP; University of Waterloo, Canada.
    Yousuf, Oreen
    Masakhane NLP; Uppsala University, Sweden.
    Ngoli, Tatiana Moteu
    Masakhane NLP.
    Klakow, Dietrich
    Saarland University, Germany.
    MasakhaNER 2.0: Africa-centric Transfer Learning for Named Entity Recognition. 2022. In: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics (ACL), 2022, p. 4488-4508. Conference paper (Refereed)
    Abstract [en]

    African languages are spoken by over a billion people, but are underrepresented in NLP research and development. The challenges impeding progress include the limited availability of annotated datasets, as well as a lack of understanding of the settings where current methods are effective. In this paper, we make progress towards solutions for these challenges, focusing on the task of named entity recognition (NER). We create the largest human-annotated NER dataset for 20 African languages, and we study the behavior of state-of-the-art cross-lingual transfer methods in an Africa-centric setting, demonstrating that the choice of source language significantly affects performance. We show that choosing the best transfer language improves zero-shot F1 scores by an average of 14 points across 20 languages compared to using English. Our results highlight the need for benchmark datasets and models that cover typologically-diverse African languages.

  • 5.
    Adewumi, Oluwatosin
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Vector Representations of Idioms in Data-Driven Chatbots for Robust Assistance. 2022. Doctoral thesis, monograph (Other academic)
    Abstract [en]

    This thesis presents resources capable of enhancing solutions of some Natural Language Processing (NLP) tasks, demonstrates the learning of abstractions by deep models through cross-lingual transferability, and shows how deep learning models trained on idioms can enhance open-domain conversational systems. The challenges of open-domain conversational systems are many and include bland repetitive utterances, lack of utterance diversity, lack of training data for low-resource languages, shallow world-knowledge and non-empathetic responses, among others. These challenges contribute to the non-human-like utterances that open-domain conversational systems suffer from. They have, hence, motivated the active research in Natural Language Understanding (NLU) and Natural Language Generation (NLG), considering the very important role conversations (or dialogues) play in human lives. The methodology employed in this thesis involves an iterative set of scientific methods. First, it conducts a systematic literature review to identify the state-of-the-art (SoTA) and gaps, such as the challenges mentioned earlier, in current research. Subsequently, it follows the seven stages of the Machine Learning (ML) life-cycle, which are data gathering (or acquisition), data preparation, model selection, training, evaluation with hyperparameter tuning, prediction and model deployment. For data acquisition, relevant datasets are acquired or created, using benchmark datasets as references, and their data statements are included. Specific contributions of this thesis are the creation of the Swedish analogy test set for evaluating word embeddings and the Potential Idiomatic Expression (PIE)-English idioms corpus for training models in idiom identification and classification. In order to create a benchmark, this thesis performs human evaluation on the generated predictions of some SoTA ML models, including DialoGPT. As different individuals may not agree on all the predictions, the Inter-Annotator Agreement (IAA) is measured. A typical method for measuring IAA is Fleiss Kappa; however, it has a number of shortcomings, including high sensitivity to the number of categories being evaluated. Therefore, this thesis introduces the credibility unanimous score (CUS), which is more intuitive, easier to calculate and seemingly less sensitive to changes in the number of categories being evaluated. The results of human evaluation and comments from evaluators provide valuable feedback on the existing challenges within the models. These create the opportunity for addressing such challenges in future work. The experiments in this thesis test two hypotheses: 1) an open-domain conversational system that is idiom-aware generates more fitting responses to prompts containing idioms, and 2) deep monolingual models learn some abstractions that generalise across languages. To investigate the first hypothesis, this thesis trains English models on the PIE-English idioms corpus for classification and generation. For the second hypothesis, it explores cross-lingual transferability from English models to Swedish, Yorùbá, Swahili, Wolof, Hausa, Nigerian Pidgin English and Kinyarwanda.
From the results, the thesis’ additional contributions mainly lie in 1) confirmation of the hypothesis that an open-domain conversational system that is idiom-aware generates more fitting responses to prompts containing idioms, 2) confirmation of the hypothesis that deep monolingual models learn some abstractions that generalise across languages, 3) introduction of CUS and its benefits, 4) insight into the energy-saving and time-saving benefits of more optimal embeddings from relatively smaller corpora, and 5) provision of public access to the model checkpoints that were developed from this work. We further discuss the ethical issues involved in developing robust, open-domain conversational systems. Parts of this thesis are already published in the form of peer-reviewed journal and conference articles.
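    CUS itself is defined in the thesis; as background, the Fleiss Kappa it is compared against can be computed as follows (a standard formula, shown here only to make the comparison concrete):

    ```python
    # Fleiss' kappa: chance-corrected agreement among a fixed number of raters.
    import numpy as np

    def fleiss_kappa(counts: np.ndarray) -> float:
        """counts[i, j] = number of raters assigning subject i to category j."""
        n = counts.sum(axis=1)[0]                    # raters per subject (constant)
        p_j = counts.sum(axis=0) / counts.sum()      # overall category proportions
        P_i = ((counts ** 2).sum(axis=1) - n) / (n * (n - 1))
        P_bar, P_e = P_i.mean(), (p_j ** 2).sum()
        return (P_bar - P_e) / (1 - P_e)

    # 3 annotators, 2 subjects, 2 categories: unanimity on the first subject only.
    print(fleiss_kappa(np.array([[3, 0], [1, 2]])))  # 0.25
    ```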

  • 6.
    Adewumi, Oluwatosin
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Word Vector Representations using Shallow Neural Networks. 2021. Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    This work highlights some important factors for consideration when developing word vector representations and data-driven conversational systems. The neural network methods for creating word embeddings have gained more prominence than their older, count-based counterparts. However, there are still challenges, such as prolonged training time and the need for more data, especially with deep neural networks. Shallow neural networks, being less deep, have the advantage of less complexity; however, they also face challenges, such as sub-optimal combinations of hyper-parameters, which produce sub-optimal models. This work, therefore, investigates the following research questions: "How important are hyper-parameters in influencing word embeddings' performance?" and "What factors are important for developing ethical and robust conversational systems?" In answering the questions, various experiments were conducted using different datasets in different studies. The first study investigates, empirically, various hyper-parameter combinations for creating word vectors and their impact on a few natural language processing (NLP) downstream tasks: named entity recognition (NER) and sentiment analysis (SA). The study shows that the optimal performance of embeddings for downstream NLP tasks depends on the task at hand. It also shows that certain combinations give strong performance across the tasks chosen for the study. Furthermore, it shows that reasonably smaller corpora are sufficient or even produce better models in some cases and take less time to train and load. This is important, especially now that environmental considerations play a prominent role in ethical research. Subsequent studies build on the findings of the first and explore the hyper-parameter combinations for Swedish and English embeddings for the downstream NER task. The second study presents the new Swedish analogy test set for evaluation of Swedish embeddings. Furthermore, it shows that character n-grams are useful for Swedish, a morphologically rich language. The third study shows that broad coverage of topics in a corpus appears to be important for producing better embeddings and that noise may be helpful in certain instances, though it is generally harmful. Hence, a relatively smaller corpus can show better performance than a larger one, as demonstrated in the work with the smaller Swedish Wikipedia corpus against the Swedish Gigaword. The argument is made, in the final study (in answering the second question) from the point of view of the philosophy of science, that the near-elimination of the presence of unwanted bias in training data and the use of fora like peer review, conferences, and journals to provide the necessary avenues for criticism and feedback are instrumental for the development of ethical and robust conversational systems.
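    A sketch of the kind of hyper-parameter sweep these studies describe, using gensim's Word2Vec; the toy corpus and grid values are illustrative, not those used in the thesis:

    ```python
    # Hedged sketch: sweep architecture (CBoW/skip-gram), training algorithm
    # (hierarchical softmax/negative sampling) and window size, then evaluate.
    from gensim.models import Word2Vec

    sentences = [["the", "cat", "sat"], ["the", "dog", "barked"]]  # toy corpus

    for sg in (0, 1):                              # 0 = CBoW, 1 = skip-gram
        for hs, negative in ((1, 0), (0, 5)):      # hier. softmax vs neg. sampling
            for window in (4, 8):
                model = Word2Vec(sentences, vector_size=100, window=window,
                                 sg=sg, hs=hs, negative=negative,
                                 min_count=1, epochs=5)
                # model.wv would now be scored intrinsically (analogy tests)
                # and extrinsically (NER, sentiment analysis) as described above.
    ```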

  • 7.
    Adewumi, Oluwatosin
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Brännvall, Rickard
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab. RISE Research Institutes of Sweden.
    Abid, Nosheen
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Pahlavan, Maryam
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Sabah Sabry, Sana
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Foteini
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Marcus
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Småprat: DialoGPT for Natural Language Generation of Swedish Dialogue by Transfer Learning. 2022. In: Proceedings of the Northern Lights Deep Learning Workshop 2022 / [ed] Sigurd Løkse, Benjamin Ricaud, Septentrio Academic Publishing, 2022, Vol. 3. Conference paper (Refereed)
    Abstract [en]

    Building open-domain conversational systems (or chatbots) that produce convincing responses is a recognized challenge. Recent state-of-the-art (SoTA) transformer-based models for the generation of natural language dialogue have demonstrated impressive performance in simulating human-like, single-turn conversations in English. This work investigates, by an empirical study, the potential for transfer learning of such models to the Swedish language. DialoGPT, an English language pre-trained model, is adapted by training on three different Swedish language conversational datasets obtained from publicly available sources: Reddit, Familjeliv and the GDC. Perplexity score (an automated intrinsic metric) and surveys by human evaluation were used to assess the performances of the fine-tuned models. We also compare the DialoGPT experiments with an attention-mechanism-based seq2seq baseline model, trained on the GDC dataset. The results indicate that the capacity for transfer learning can be exploited with considerable success. Human evaluators asked to score the simulated dialogues judged over 57% of the chatbot responses to be human-like for the model trained on the largest (Swedish) dataset. The work agrees with the hypothesis that deep monolingual models learn some abstractions which generalize across languages. We contribute the codes, datasets and model checkpoints and host the demos on the HuggingFace platform.
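    Perplexity, the intrinsic metric used above, is the exponential of the model's mean token cross-entropy. A minimal sketch with the public English DialoGPT base model (the fine-tuned Swedish checkpoints on the HuggingFace hub would be substituted in practice):

    ```python
    # Compute perplexity of a causal LM on a held-out utterance.
    import math
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
    model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")
    model.eval()

    ids = tok("Hej! Hur mår du idag?", return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss    # mean token cross-entropy
    print("perplexity:", math.exp(loss.item()))
    ```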

  • 8.
    Adewumi, Oluwatosin
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Foteini
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Marcus
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Conversational Systems in Machine Learning from the Point of View of the Philosophy of Science—Using Alime Chat and Related Studies. 2019. In: Philosophies, ISSN 2409-9287, Vol. 4, no 3, article id 41. Article in journal (Refereed)
    Abstract [en]

    This essay discusses current research efforts in conversational systems from the philosophy of science point of view and evaluates some conversational systems research activities from the standpoint of naturalism philosophical theory. Conversational systems or chatbots have advanced over the decades and now have become mainstream applications. They are software that users can communicate with, using natural language. Particular attention is given to the Alime Chat conversational system, already in industrial use, and the related research. The competitive nature of systems in production is a result of different researchers and developers trying to produce new conversational systems that can outperform previous or state-of-the-art systems. Different factors affect the quality of the conversational systems produced, and how one system is assessed as being better than another is a function of objectivity and of the relevant experimental results. This essay examines the research practices from, among others, Longino’s view on objectivity and Popper’s stand on falsification. Furthermore, the need for qualitative and large datasets is emphasized. This is in addition to the importance of the peer-review process in scientific publishing, as a means of developing, validating, or rejecting theories, claims, or methodologies in the research community. In conclusion, open data and open scientific discussion fora should become more prominent over the mere publication-focused trend.

  • 9.
    Adewumi, Oluwatosin
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Foteini
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Marcus
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Corpora Compared: The Case of the Swedish Gigaword & Wikipedia Corpora. 2020. Conference paper (Refereed)
    Abstract [en]

    In this work, we show that the difference in performance of embeddings from differently sourced data for a given language can be due to other factors besides data size. Natural language processing (NLP) tasks usually perform better with embeddings from bigger corpora. However, broadness of covered domain and noise can play important roles. We evaluate embeddings based on two Swedish corpora: The Gigaword and Wikipedia, in analogy (intrinsic) tests and discover that the embeddings from the Wikipedia corpus generally outperform those from the Gigaword corpus, which is a bigger corpus. Downstream tests will be required to have a definite evaluation.
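    The analogy (intrinsic) test referred to above follows the familiar vector-offset scheme. A sketch with gensim, where the embedding file and analogy file paths are placeholders:

    ```python
    # Hedged sketch: intrinsic evaluation of Swedish embeddings by analogy.
    from gensim.models import KeyedVectors

    wv = KeyedVectors.load_word2vec_format("swedish_embeddings.vec")  # placeholder path
    # "man is to kung (king) as kvinna (woman) is to ...?" -> ideally "drottning"
    print(wv.most_similar(positive=["kung", "kvinna"], negative=["man"], topn=1))
    # An entire analogy file in the Google questions-words format can be scored:
    accuracy, sections = wv.evaluate_word_analogies("swedish_analogy_test_set.txt")
    ```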

  • 10.
    Adewumi, Oluwatosin
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Foteini
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Marcus
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Exploring Swedish & English fastText Embeddings. 2022. In: Artificial Intelligence and Cognition 2022: Proceedings of the 8th International Workshop on Artificial Intelligence and Cognition / [ed] Hadi Banaee, Amy Loutfi, Alessandro Saffiotti, Antonio Lieto, 2022, Vol. 3400, p. 201-208. Conference paper (Refereed)
    Abstract [en]

    In this paper, we show that embeddings from relatively smaller corpora sometimes outperform those from larger corpora and we introduce a new Swedish analogy test set and make it publicly available. To achieve good performance in Natural Language Processing (NLP) downstream tasks, several factors play important roles: dataset size, the right hyper-parameters, and well-trained embeddings. We utilize the fastText tool for our experiments. We evaluate both the Swedish and English embeddings that we created using intrinsic evaluation (including analogy & Spearman correlation) and compare them with 2 common, publicly available embeddings. Our English continuous Bag-of-Words (CBoW)-negative sampling embedding shows better performance compared to the publicly available GoogleNews version. We also describe the relationship between NLP and cognitive science. We contribute the embeddings for research or other useful purposes by publicly releasing them.
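    A sketch of training such fastText embeddings through gensim; the configuration mirrors the CBoW-negative sampling setting named above, with illustrative values otherwise:

    ```python
    # Hedged sketch: fastText CBoW with negative sampling and character n-grams.
    from gensim.models import FastText

    sentences = [["katten", "sitter"], ["hunden", "skäller"]]  # toy Swedish corpus
    model = FastText(sentences, vector_size=100, sg=0, negative=5,  # CBoW + neg. sampling
                     min_n=3, max_n=6,      # character n-gram range
                     min_count=1, epochs=5)
    # Subword n-grams give vectors even for unseen inflected forms:
    print(model.wv["katterna"])
    ```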

  • 11.
    Adewumi, Oluwatosin
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Foteini
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Marcus
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Exploring Swedish & English fastText Embeddings for NER with the Transformer. Manuscript (preprint) (Other academic)
    Abstract [en]

    In this paper, our main contributions are showing that embeddings from relatively smaller corpora can outperform ones from far larger corpora, and presenting the new Swedish analogy test set. To achieve good network performance in natural language processing (NLP) downstream tasks, several factors play important roles: dataset size, the right hyper-parameters, and well-trained embeddings. We show that, with the right set of hyper-parameters, good network performance can be reached even on smaller datasets. We evaluate the embeddings at the intrinsic and extrinsic levels, by deploying them on the Transformer in the named entity recognition (NER) task, and conduct significance tests. This is done for both Swedish and English. We obtain better performance in both languages on the downstream task with far smaller training data, compared to recently released, common crawl versions; and character n-grams appear useful for Swedish, a morphologically rich language.

  • 12.
    Adewumi, Oluwatosin
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Foteini
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Marcus
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Vector Representations of Idioms in Conversational Systems. 2022. In: Sci, E-ISSN 2413-4155, Vol. 4, no 4, article id 37. Article in journal (Refereed)
    Abstract [en]

    In this study, we demonstrate that an open-domain conversational system trained on idioms or figurative language generates more fitting responses to prompts containing idioms. Idioms are a part of everyday speech in many languages and across many cultures, but they pose a great challenge for many natural language processing (NLP) systems that involve tasks such as information retrieval (IR), machine translation (MT), and conversational artificial intelligence (AI). We utilized the Potential Idiomatic Expression (PIE)-English idiom corpus for the two tasks that we investigated: classification and conversation generation. We achieved a state-of-the-art (SoTA) result of a 98% macro F1 score on the classification task by using the SoTA T5 model. We experimented with three instances of the SoTA dialogue model—the Dialogue Generative Pre-trained Transformer (DialoGPT)—for conversation generation. Their performances were evaluated by using the automatic metric, perplexity, and a human evaluation. The results showed that the model trained on the idiom corpus generated more fitting responses to prompts containing idioms 71.9% of the time in comparison with a similar model that was not trained on the idiom corpus. We have contributed the model checkpoint/demo/code to the HuggingFace hub for public access.
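    Classification with T5 is text-to-text: the model generates the class name as a string. A minimal sketch with the public t5-base checkpoint; the prompt format is an assumption, not necessarily the one used in the paper:

    ```python
    # Hedged sketch: idiom-class prediction as text generation with T5.
    from transformers import T5ForConditionalGeneration, T5Tokenizer

    tok = T5Tokenizer.from_pretrained("t5-base")
    model = T5ForConditionalGeneration.from_pretrained("t5-base")

    prompt = "classify idiom: He kicked the bucket last year."
    ids = tok(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=4)
    # After fine-tuning on PIE-English, the decoded string would be a class
    # label, e.g. "euphemism".
    print(tok.decode(out[0], skip_special_tokens=True))
    ```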

  • 13.
    Adewumi, Oluwatosin
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Foteini
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Marcus
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Word2Vec: Optimal hyperparameters and their impact on natural language processing downstream tasks. 2022. In: Open Computer Science, E-ISSN 2299-1093, Vol. 12, no 1, p. 134-141. Article in journal (Refereed)
    Abstract [en]

    Word2Vec is a prominent model for natural language processing tasks. Similar inspiration is found in distributed embeddings (word-vectors) in recent state-of-the-art deep neural networks. However, the wrong combination of hyperparameters can produce embeddings with poor quality. The objective of this work is to empirically show that an optimal combination of Word2Vec hyper-parameters exists and to evaluate various combinations. We compare them with the publicly released, original Word2Vec embedding. Both intrinsic and extrinsic (downstream) evaluations are carried out, including named entity recognition and sentiment analysis. Our main contributions include showing that the best model is usually task-specific, that high analogy scores do not necessarily correlate positively with F1 scores, and that performance is not dependent on data size alone. If ethical considerations to save time, energy, and the environment are made, then relatively smaller corpora may do just as well or even better in some cases. Increasing the dimension size of embeddings after a point leads to poor quality or performance. In addition, using a relatively small corpus, we obtain better WordSim scores, corresponding Spearman correlation, and better downstream performances (with significance tests) compared to the original model, which is trained on a 100 billion-word corpus.

  • 14.
    Adewumi, Oluwatosin
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Foteini
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Marcus
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Word2Vec: Optimal Hyper-Parameters and Their Impact on NLP Downstream Tasks. Manuscript (preprint) (Other academic)
    Abstract [en]

    Word2Vec is a prominent model for natural language processing (NLP) tasks. Similar inspiration is found in distributed embeddings in new state-of-the-art (SotA) deep neural networks. However, the wrong combination of hyper-parameters can produce poor quality vectors. The objective of this work is to empirically show that an optimal combination of hyper-parameters exists and to evaluate various combinations. We compare them with the released, pre-trained original word2vec model. Both intrinsic and extrinsic (downstream) evaluations, including named entity recognition (NER) and sentiment analysis (SA), were carried out. The downstream tasks reveal that the best model is usually task-specific, that high analogy scores do not necessarily correlate positively with F1 scores, and that performance is not dependent on data size alone. Increasing vector dimension size after a point leads to poor quality or performance. If ethical considerations to save time, energy and the environment are made, then reasonably smaller corpora may do just as well or even better in some cases. Besides, using a small corpus, we obtain better human-assigned WordSim scores, corresponding Spearman correlation and better downstream performances (with significance tests) compared to the original model, trained on a 100 billion-word corpus.

  • 15.
    Adewumi, Oluwatosin
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Marcus
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Inner For-Loop for Speeding Up Blockchain Mining. 2020. In: Open Computer Science, E-ISSN 2299-1093, Vol. 10, no 1, p. 42-47. Article in journal (Refereed)
    Abstract [en]

    In this paper, the authors propose to increase the efficiency of blockchain mining by using a population-based approach. Blockchain relies on solving difficult mathematical problems as proof-of-work within a network before blocks are added to the chain. The brute-force approach, advocated by some as the fastest algorithm for solving partial hash collisions and implemented in the Bitcoin blockchain, implies an exhaustive, sequential search. It involves incrementing the nonce (number) of the header by one, then taking a double SHA-256 hash at each instance and comparing it with a target value to ascertain whether it is lower than that target. It excessively consumes both time and power. In this paper, the authors therefore suggest using an inner for-loop for the population-based approach. Comparison shows that it is a slightly faster approach than brute force, with an average speed advantage of about 1.67%, or 3,420 iterations per second, performing better 73% of the time. We also observed that the more particles deployed, the better the performance, up to a pivotal point. Furthermore, a recommendation is made on taming the excessive use of power by networks like Bitcoin's, by using penalty by consensus.
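    The brute-force baseline described above is easy to make concrete: increment the nonce, take a double SHA-256 of the header, and compare against the target. The values below are toy ones, far easier than Bitcoin's real difficulty; the paper's population-based variant would partition the nonce space across particles, each scanning its share with an inner for-loop.

    ```python
    # Brute-force proof-of-work baseline (toy difficulty).
    import hashlib

    def double_sha256(data: bytes) -> bytes:
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def mine(header: bytes, target: int) -> int:
        nonce = 0
        while True:
            digest = double_sha256(header + nonce.to_bytes(8, "little"))
            if int.from_bytes(digest, "big") < target:
                return nonce            # proof-of-work found
            nonce += 1                  # exhaustive, sequential search

    # Target with ~13 leading zero bits: found after a few thousand iterations.
    print(mine(b"toy-block-header", target=1 << 243))
    ```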

  • 16.
    Adewumi, Oluwatosin
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Sabry, Sana Sabah
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Abid, Nosheen
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Foteini
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Marcus
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    T5 for Hate Speech, Augmented Data, and Ensemble. 2023. In: Sci, E-ISSN 2413-4155, Vol. 5, no 4, article id 37. Article in journal (Refereed)
    Abstract [en]

    We conduct relatively extensive investigations of automatic hate speech (HS) detection using different State-of-The-Art (SoTA) baselines across 11 subtasks spanning six different datasets. Our motivation is to determine which of the recent SoTA models is best for automatic hate speech detection and what advantage methods, such as data augmentation and ensemble, may have on the best model, if any. We carry out six cross-task investigations. We achieve new SoTA results on two subtasks—macro F1 scores of 91.73% and 53.21% for subtasks A and B of the HASOC 2020 dataset, surpassing previous SoTA scores of 51.52% and 26.52%, respectively. We achieve near-SoTA results on two others—macro F1 scores of 81.66% for subtask A of the OLID 2019 and 82.54% for subtask A of the HASOC 2021, in comparison to SoTA results of 82.9% and 83.05%, respectively. We perform error analysis and use two eXplainable Artificial Intelligence (XAI) algorithms (Integrated Gradient (IG) and SHapley Additive exPlanations (SHAP)) to reveal how two of the models (Bi-Directional Long Short-Term Memory Network (Bi-LSTM) and Text-to-Text-Transfer Transformer (T5)) make the predictions they do by using examples. Other contributions of this work are: (1) the introduction of a simple, novel mechanism for correcting Out-of-Class (OoC) predictions in T5, (2) a detailed description of the data augmentation methods, and (3) the revelation of the poor data annotations in the HASOC 2021 dataset by using several examples and XAI (buttressing the need for better quality control). We publicly release our model checkpoints and codes to foster transparency.
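    The OoC-correction mechanism itself is described in the paper; purely as an illustration of the problem it solves, a generative classifier like T5 can emit strings outside the label set, which must then be snapped back to a valid class. One hedged way to do that (not necessarily the paper's mechanism):

    ```python
    # Hedged sketch: snap free-text T5 outputs to the nearest valid class label.
    import difflib

    VALID_LABELS = ["hateful", "offensive", "neither"]   # illustrative label set

    def correct_ooc(generated: str, default: str = "neither") -> str:
        text = generated.strip().lower()
        if text in VALID_LABELS:
            return text                                   # in-class prediction
        close = difflib.get_close_matches(text, VALID_LABELS, n=1)
        return close[0] if close else default             # out-of-class corrected

    print(correct_ooc("hatefull"))   # -> "hateful"
    ```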

  • 17.
    Adewumi, Tosin
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab. Masakhane.
    Adeyemi, Mofetoluwa
    Masakhane.
    Anuoluwapo, Aremu
    Masakhane.
    Peters, Bukola
    CIS.
    Buzaaba, Happy
    Masakhane.
    Samuel, Oyerinde
    Masakhane.
    Rufai, Amina Mardiyyah
    Masakhane.
    Ajibade, Benjamin
    Masakhane.
    Gwadabe, Tajudeen
    Masakhane.
    Koulibaly Traore, Mory Moussou
    Masakhane.
    Ajayi, Tunde Oluwaseyi
    Masakhane.
    Muhammad, Shamsuddeen
    Baruwa, Ahmed
    Masakhane.
    Owoicho, Paul
    Masakhane.
    Ogunremi, Tolulope
    Masakhane.
    Ngigi, Phylis
    Jomo Kenyatta University of Agriculture and Technology.
    Ahia, Orevaoghene
    Masakhane.
    Nasir, Ruqayya
    Masakhane.
    Liwicki, Foteini
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Marcus
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    AfriWOZ: Corpus for Exploiting Cross-Lingual Transfer for Dialogue Generation in Low-Resource, African Languages. 2023. In: IJCNN 2023 - International Joint Conference on Neural Networks, Conference Proceedings, Institute of Electrical and Electronics Engineers Inc., 2023. Conference paper (Refereed)
    Abstract [en]

    Dialogue generation is an important NLP task fraught with many challenges. The challenges become more daunting for low-resource African languages. To enable the creation of dialogue agents for African languages, we contribute the first high-quality dialogue datasets for 6 African languages: Swahili, Wolof, Hausa, Nigerian Pidgin English, Kinyarwanda & Yorùbá. There are a total of 9,000 turns, each language having 1,500 turns, which we translate from a portion of the English multi-domain MultiWOZ dataset. Subsequently, we benchmark by investigating & analyzing the effectiveness of modelling through transfer learning by utilizing state-of-the-art (SoTA) deep monolingual models: DialoGPT and BlenderBot. We compare the models with a simple seq2seq baseline using perplexity. Besides this, we conduct human evaluation of single-turn conversations by using majority votes and measure inter-annotator agreement (IAA). We find that the hypothesis that deep monolingual models learn some abstractions that generalize across languages holds. We observe human-like conversations, to different degrees, in 5 out of the 6 languages. The language with the most transferable properties is the Nigerian Pidgin English, with a human-likeness score of 78.1%, of which 34.4% are unanimous. We freely provide the datasets and host the model checkpoints/demos on the HuggingFace hub for public access.

  • 18.
    Adewumi, Tosin
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Alkhaled, Lama
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Mokayed, Hamam
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Foteini
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Marcus
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    ML_LTU at SemEval-2022 Task 4: T5 Towards Identifying Patronizing and Condescending Language. 2022. In: Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022) / [ed] Guy Emerson, Natalie Schluter, Gabriel Stanovsky, Ritesh Kumar, Alexis Palmer, Nathan Schneider, Siddharth Singh, Shyam Ratan, Association for Computational Linguistics, 2022, p. 473-478. Conference paper (Refereed)
    Abstract [en]

    This paper describes the system used by the Machine Learning Group of LTU in subtask 1 of the SemEval-2022 Task 4: Patronizing and Condescending Language (PCL) Detection. Our system consists of finetuning a pretrained text-to-text transfer transformer (T5) and innovatively reducing its out-of-class predictions. The main contributions of this paper are 1) the description of the implementation details of the T5 model we used, 2) analysis of the successes & struggles of the model in this task, and 3) ablation studies beyond the official submission to ascertain the relative importance of data split. Our model achieves an F1 score of 0.5452 on the official test set.

  • 19.
    Adewumi, Tosin
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Habib, Nudrat
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Alkhaled, Lama
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Barney, Elisa
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Instruction Makes a Difference. 2024. In: Document Analysis Systems: 16th IAPR International Workshop, DAS 2024, Athens, Greece, August 30–31, 2024, Proceedings / [ed] Giorgos Sfikas; George Retsinas, Springer Science and Business Media Deutschland GmbH, 2024, p. 71-88. Conference paper (Refereed)
    Abstract [en]

    We introduce the Instruction Document Visual Question Answering (iDocVQA) dataset and the Large Language Document (LLaDoc) model, for training Language-Vision (LV) models for document analysis and predictions on document images, respectively. Usually, deep neural networks for the DocVQA task are trained on datasets lacking instructions. We show that using instruction-following datasets improves performance. We compare performance across document-related datasets using the recent state-of-the-art (SotA) Large Language and Vision Assistant (LLaVA)1.5 as the base model. We also evaluate the performance of the derived models for object hallucination using the Polling-based Object Probing Evaluation (POPE) dataset. The results show that instruction-tuning performance ranges from 11x to 32x of zero-shot performance and from 0.1% to 4.2% over non-instruction (traditional task) finetuning. Despite the gains, these still fall short of human performance (94.36%), implying there’s much room for improvement.

  • 20.
    Adewumi, Tosin
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Foteini
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Marcus
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    State-of-the-Art in Open-Domain Conversational AI: A Survey. 2022. In: Information, E-ISSN 2078-2489, Vol. 13, no 6, article id 298. Article, review/survey (Refereed)
    Abstract [en]

    We survey SoTA open-domain conversational AI models with the objective of presenting the prevailing challenges that still exist to spur future research. In addition, we provide statistics on the gender of conversational AI in order to guide the ethics discussion surrounding the issue. Open-domain conversational AI models are known to have several challenges, including bland, repetitive responses and performance degradation when prompted with figurative language, among others. First, we provide some background by discussing some topics of interest in conversational AI. We then discuss the method applied to the two investigations carried out that make up this study. The first investigation involves a search for recent SoTA open-domain conversational AI models, while the second involves the search for 100 conversational AI to assess their gender. Results of the survey show that progress has been made with recent SoTA conversational AI, but there are still persistent challenges that need to be solved, and the female gender is more common than the male for conversational AI. One main takeaway is that hybrid models of conversational AI offer more advantages than any single architecture. The key contributions of this survey are (1) the identification of prevailing challenges in SoTA open-domain conversational AI, (2) the rarely held discussion on open-domain conversational AI for low-resource languages, and (3) the discussion about the ethics surrounding the gender of conversational AI.

  • 21.
    Adewumi, Tosin P.
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Foteini
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Marcus
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    The Challenge of Diacritics in Yorùbá Embeddings. 2020. In: ML4D 2020 Proceedings / [ed] Tejumade Afonja; Konstantin Klemmer; Aya Salama; Paula Rodriguez Diaz; Niveditha Kalavakonda; Oluwafemi Azeez, Neural Information Processing Systems Foundation, 2020, article id 2011.07605. Conference paper (Refereed)
    Abstract [en]

    The major contributions of this work include the empirical establishment of better performance for Yoruba embeddings from an undiacritized (normalized) dataset and the provision of new analogy sets for evaluation. The Yoruba language, being a tonal language, utilizes diacritics (tonal marks) in written form. We show that this affects embedding performance by creating embeddings from exactly the same Wikipedia dataset, but with the second one normalized to be undiacritized. We further compare average intrinsic performance with two other works (using an analogy test set & WordSim) and obtain the best performance in WordSim and the corresponding Spearman correlation.
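    Normalizing diacritized text, as done for the second embedding above, can be sketched with Unicode decomposition: decompose each character and drop the combining (tonal) marks. The authors' exact preprocessing may differ.

    ```python
    # Hedged sketch: strip tonal diacritics from Yorùbá text.
    import unicodedata

    def strip_diacritics(text: str) -> str:
        decomposed = unicodedata.normalize("NFD", text)   # split base + combining marks
        return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

    print(strip_diacritics("Yorùbá"))  # -> "Yoruba"
    ```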

  • 22.
    Adewumi, Tosin P.
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Foteini
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Marcus
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Vector Representations of Idioms in Chatbots. 2020. In: Proceedings: SAIS Workshop 2020, Chalmers University of Technology, 2020. Conference paper (Refereed)
    Abstract [en]

    Open-domain chatbots have advanced but still have many gaps. My PhD aims to solve a few of those gaps by creating vector representations of idioms (figures of speech) that will be beneficial to chatbots and natural language processing (NLP), generally. In the process, new, optimal fastText embeddings in Swedish and English have been created and the first Swedish analogy test set, larger than the Google original, for intrinsic evaluation of Swedish embeddings has also been produced. Major milestones have been attained and others are soon to follow. The deliverables of this project will give NLP researchers the opportunity to measure the quality of Swedish embeddings easily and advance state-of-the-art (SotA) in NLP.

  • 23.
    Adewumi, Tosin
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Södergren, Isabella
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Digital Services and Systems.
    Alkhaled, Lama
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Sabry, Sana Sabah
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Foteini
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Marcus
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Bipol: Multi-axes Evaluation of Bias with Explainability in Benchmark Datasets2023In: Proceedings of Recent Advances in Natural Language Processing / [ed] Galia Angelova, Maria Kunilovskaya and Ruslan Mitkov, Incoma Ltd. , 2023, p. 1-10Conference paper (Refereed)
    Abstract [en]

    We investigate five English NLP benchmark datasets (from the SuperGLUE leaderboard) and two Swedish datasets for bias, along multiple axes. The datasets are: Boolean Questions (BoolQ), CommitmentBank (CB), Winograd Schema Challenge (WSC), Winogender diagnostic (AX-g), Recognising Textual Entailment (RTE), Swedish CB, and SWEDN. Bias can be harmful, and it is known to be common in the data that ML models learn from. To mitigate bias in data, it is crucial to be able to estimate it objectively. We use bipol, a novel multi-axes bias metric with explainability, to estimate and explain how much bias exists in these datasets. Multilingual, multi-axes bias evaluation is not very common. Hence, we also contribute a new, large Swedish bias-labelled dataset (of 2 million samples), translated from the English version, and train the SotA mT5 model on it. In addition, we contribute new multi-axes lexica for bias detection in Swedish. We make the code, model, and new dataset publicly available.

  • 24.
    Adewumi, Tosin
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Vadoodi, Roshanak
    Luleå University of Technology, Department of Civil, Environmental and Natural Resources Engineering, Geosciences and Environmental Engineering.
    Tripathy, Aparajita
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Nikolaidou, Konstantina
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Foteini
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Marcus
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Potential Idiomatic Expression (PIE)-English: Corpus for Classes of Idioms2022In: Proceedings of the 13th Language Resources and Evaluation Conference / [ed] Nicoletta Calzolari; Frédéric Béchet; Philippe Blache; Khalid Choukri; Christopher Cieri; Thierry Declerck; Sara Goggi; Hitoshi Isahara; Bente Maegaard; Joseph Mariani; Hélène Mazo; Jan Odijk; Stelios Piperidis, European Language Resources Association (ELRA) , 2022, p. 689-696Conference paper (Refereed)
    Abstract [en]

    We present a fairly large Potential Idiomatic Expression (PIE) dataset for Natural Language Processing (NLP) in English. The challenges NLP systems face with tasks such as machine translation (MT), word sense disambiguation (WSD) and information retrieval make it imperative to have a labelled idioms dataset with classes, as in this work. To the best of the authors' knowledge, this is the first idioms corpus with classes of idioms beyond the literal and general idioms classification. In particular, the following classes are labelled in the dataset: metaphor, simile, euphemism, parallelism, personification, oxymoron, paradox, hyperbole, irony and literal. We obtain an overall inter-annotator agreement (IAA) score, between two independent annotators, of 88.89%. Many past efforts have been limited in corpus size and the classes of samples, but this dataset contains over 20,100 samples, with almost 1,200 cases of idioms (with their meanings) from 10 classes (or senses). The corpus may also be extended by researchers to meet specific needs. The corpus has part-of-speech (PoS) tagging from the NLTK library. Classification experiments performed on the corpus to obtain a baseline and a comparison among three common models, including the state-of-the-art (SoTA) BERT model, give good results. We also make publicly available the corpus and the relevant code for working with it for NLP tasks.
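
    Since the abstract notes that the corpus carries PoS tags from the NLTK library, a minimal sketch of that tagging step (with an invented example sentence) might look as follows; NLTK resource names can vary between versions.

    ```python
    # Minimal sketch: PoS-tagging a potential idiomatic expression with NLTK.
    # The sample sentence is invented; resource names may vary by NLTK version.
    import nltk

    nltk.download("punkt", quiet=True)
    nltk.download("averaged_perceptron_tagger", quiet=True)

    tokens = nltk.word_tokenize("He finally kicked the bucket after a long illness.")
    print(nltk.pos_tag(tokens))
    # e.g. [('He', 'PRP'), ('finally', 'RB'), ('kicked', 'VBD'), ...]
    ```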

  • 25.
    Al-Azzawi, Sana
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Kovács, György
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Nilsson, Filip
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Adewumi, Tosin
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Marcus
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    NLP-LTU at SemEval-2023 Task 10: The Impact of Data Augmentation and Semi-Supervised Learning Techniques on Text Classification Performance on an Imbalanced Dataset2023In: 17th International Workshop on Semantic Evaluation, SemEval 2023: Proceedings of the Workshop, Association for Computational Linguistics, 2023, p. 1421-1427Conference paper (Refereed)
  • 26.
    Alkhaled, Lama
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Adewumi, Oluwatosin
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Sabry, Sana Sabah
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Bipol: A novel multi-axes bias evaluation metric with explainability for NLP2023In: Natural Language Processing Journal, ISSN 2949-7191, Vol. 4, article id 100030Article in journal (Refereed)
    Abstract [en]

    We introduce bipol, a new metric with explainability, for estimating social bias in text data. Harmful bias is prevalent in many online sources of data that are used for training machine learning (ML) models. In a step to address this challenge, we create a novel metric that involves a two-step process: corpus-level evaluation based on model classification, and sentence-level evaluation based on (sensitive) term frequency (TF). After creating new models to classify bias using SotA architectures, we evaluate two popular NLP datasets (COPA and SQuADv2) and the WinoBias dataset. As an additional contribution, we create a large English dataset (with almost 2 million labelled samples) for training models in bias classification and make it publicly available. We also make our code public.
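
    A deliberately simplified sketch of the two-step idea described above, not the authors' exact formula: step 1 flags sentences with a (here, toy) bias classifier; step 2 scores flagged sentences by the imbalance of sensitive-term frequencies along one axis.

    ```python
    # Illustrative sketch of bipol's two-step structure (toy gender axis only;
    # the published metric aggregates multiple axes and uses trained classifiers).
    from typing import Callable, Iterable

    GENDER_LEXICON = {
        "male": {"he", "him", "his", "man"},
        "female": {"she", "her", "hers", "woman"},
    }

    def sentence_bias(sentence: str) -> float:
        """Term-frequency imbalance along one axis, in [0, 1]."""
        tokens = sentence.lower().split()
        counts = [sum(tok in terms for tok in tokens)
                  for terms in GENDER_LEXICON.values()]
        total = sum(counts)
        return 0.0 if total == 0 else (max(counts) - min(counts)) / total

    def corpus_bias(sentences: Iterable[str],
                    classify: Callable[[str], bool]) -> float:
        """Average sentence-level score over sentences flagged as biased."""
        flagged = [s for s in sentences if classify(s)]
        return sum(map(sentence_bias, flagged)) / len(flagged) if flagged else 0.0

    # Toy stand-in for the trained step-1 classifier.
    flag = lambda s: any(tok in s.lower().split()
                         for terms in GENDER_LEXICON.values() for tok in terms)

    print(corpus_bias(["He said he would handle it.", "The sky is clear."], flag))
    # -> 1.0 (the only flagged sentence contains exclusively male terms)
    ```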

  • 27.
    Azime, Israel Abebe
    et al.
    Saarland University, Germany.
    Al-Azzawi, Sana Sabah
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Tonja, Atnafu Lambebo
    Instituto Politécnico Nacional, Mexico.
    Shode, Iyanuoluwa
    Montclair State University, USA.
    Alabi, Jesujoba
    Saarland University, Germany.
    Awokoya, Ayodele
    University of Ibadan, Nigeria.
    Oduwole, Mardiyyah
    Adewumi, Tosin
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Fanijo, Samuel
    Iowa State University, USA.
    Oyinkansola, Awosan
    Yousuf, Oreen
    Masakhane-Afrisenti at SemEval-2023 Task 12: Sentiment Analysis using Afro-centric Language Models and Adapters for Low-resource African Languages2023In: The 17th International Workshop on Semantic Evaluation (SemEval-2023): Proceedings of the Workshop / [ed] Atul Kr. Ojha; A. Seza Dogruoz; Giovanni Da San Martino; Harish Tayyar Madabushi; Ritesh Kumar; Elisa Sartori, Association for Computational Linguistics , 2023, p. 1311-1316Conference paper (Refereed)
  • 28.
    Gehrmann, Sebastian
    et al.
    Google Research.
    Adewumi, Tosin
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab. Masakhane, Africa.
    Aggarwal, Karmanya
    IIIT Delhi, India.
    Ammanamanchi, Pawan Sasanka
    IIIT Hyderabad, India.
    Anuoluwapo, Aremu
    Masakhane, Africa; University of Lagos, Nigeria.
    Bosselut, Antoine
    Stanford University, USA.
    Chandu, Khyathi Raghavi
    Carnegie Mellon University, USA.
    Clinciu, Miruna
    Edinburgh Centre for Robotics, UK; Heriot-Watt University, UK; University of Edinburgh.
    Das, Dipanjan
    Google Research.
    Dhole, Kaustubh D.
    Amelia R&D, New York, USA.
    Du, Wanyu
    University of Virginia, USA.
    Durmus, Esin
    Cornell University, USA.
    Dušek, Ondřej
    Charles University, Prague, Czech Republic.
    Emezue, Chris
    Masakhane, Africa; Technical University of Munich, Munich, Germany.
    Gangal, Varun
    Carnegie Mellon University, USA.
    Garbacea, Cristina
    University of Michigan Ann Arbor, USA.
    Hashimoto, Tatsunori
    Stanford University, USA.
    Hou, Yufang
    IBM Research.
    Jernite, Yacine
    Hugging Face.
    Jhamtani, Harsh
    Carnegie Mellon University, USA.
    Ji, Yangfeng
    University of Virginia, USA.
    Jolly, Shailza
    DFKI, Germany; Technical University of Kaiserslautern, Germany.
    Kale, Mihir
    Google Research.
    Kumar, Dhruv
    University of Waterloo, Canada.
    Ladhak, Faisal
    Columbia University, USA.
    Madaan, Aman
    Carnegie Mellon University, USA.
    Maddela, Mounica
    Georgia Tech, USA.
    Mahajan, Khyati
    University of North Carolina, Charlotte, USA.
    Mahamood, Saad
    Trivago.
    Majumder, Bodhisattwa Prasad
    University of California San Diego, USA.
    Martins, Pedro Henrique
    Instituto de Telecomunicações, Portugal.
    McMillan-Major, Angelina
    University of Washington, USA.
    Mille, Simon
    Pompeu Fabra University, Spain.
    van Miltenburg, Emiel
    Tilburg University, Netherlands.
    Nadeem, Moin
    Massachusetts Institute of Technology, USA.
    Narayan, Shashi
    Google Research.
    Nikolaev, Vitaly
    Google Research.
    Niyongabo, Rubungo Andre
    Masakhane, Africa; University of Electronic Science and Technology of China, China.
    Osei, Salomey
    Kwame Nkrumah University of Science and Technology, Ghana; Masakhane, Africa.
    Parikh, Ankur
    Google Research.
    Perez-Beltrachini, Laura
    University of Edinburgh, UK.
    Ramesh Rao, Niranjan
    National Institute of Technology Karnataka, India.
    Raunak, Vikas
    Microsoft.
    Rodriguez, Juan Diego
    University of Texas at Austin, USA.
    Santhanam, Sashank
    University of North Carolina, Charlotte, USA.
    Sedoc, João
    New York University, USA.
    Sellam, Thibault
    Google Research.
    Shaikh, Samira
    University of North Carolina, Charlotte, USA.
    Shimorina, Anastasia
    Université de Lorraine, France.
    Sobrevilla Cabezudo, Marco Antonio
    University São Paulo, Brazil.
    Strobelt, Hendrik
    IBM Research.
    Subramani, Nishant
    Intelligent Systems Lab, Intel; Masakhane, Africa.
    Xu, Wei
    Georgia Tech, USA.
    Yang, Diyi
    Georgia Tech, USA.
    Yerukola, Akhila
    Samsung Research.
    Zhou, Jiawei
    Harvard University, USA.
    The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics2021In: The 1st Workshop on Natural Language Generation, Evaluation, and Metrics: Proceedings of the Workshop, Association for Computational Linguistics, 2021, p. 96-120, article id 2021.gem-1.10Conference paper (Refereed)
    Abstract [en]

    We introduce GEM, a living benchmark for natural language Generation (NLG), its Evaluation, and Metrics. Measuring progress in NLG relies on a constantly evolving ecosystem of automated metrics, datasets, and human evaluation standards. Due to this moving target, new models often still evaluate on divergent anglo-centric corpora with well-established, but flawed, metrics. This disconnect makes it challenging to identify the limitations of current models and opportunities for progress. Addressing this limitation, GEM provides an environment in which models can easily be applied to a wide set of tasks and in which evaluation strategies can be tested. Regular updates to the benchmark will help NLG research become more multilingual and evolve the challenge alongside models. This paper serves as the description of the data for the 2021 shared task at the associated GEM Workshop.

  • 29.
    Gehrmann, Sebastian
    et al.
    Google Research.
    Bhattacharjee, Abhik
    Bangladesh University of Engineering and Technology, Bangladesh.
    Mahendiran, Abinaya
    Mphasis NEXT Labs.
    Wang, Alex
    New York University, USA.
    Papangelis, Alexandros
    Amazon Alexa AI.
    Madaan, Aman
    Carnegie Mellon University, USA.
    McMillan-Major, Angelina
    Hugging Face.
    Shvets, Anna
    Fablab in Paris by Inetum, France.
    Upadhyay, Ashish
    Robert Gordon University, Scotland.
    Bohnet, Bernd
    Google Research.
    Yao, Bingsheng
    Rensselaer Polytechnic Institute, USA.
    Wilie, Bryan
    The Hong Kong University of Science and Technology, Hong Kong.
    Bhagavatula, Chandra
    Allen Institute for AI, USA.
    You, Chaobin
    Tianjin University, China.
    Thomson, Craig
    University of Aberdeen, Scotland.
    Garbacea, Cristina
    University of Michigan, USA.
    Wang, Dakuo
    MIT-IBM Watson AI Lab, USA; Northeastern University.
    Deutsch, Daniel
    University of Pennsylvania, USA.
    Xiong, Deyi
    Tianjin University, China.
    Jin, Di
    Amazon Alexa AI.
    Gkatzia, Dimitra
    Edinburgh Napier University, Scotland.
    Radev, Dragomir
    Yale University, USA.
    Clark, Elizabeth
    Google Research.
    Durmus, Esin
    Stanford University, USA.
    Ladhak, Faisal
    Columbia University, USA.
    Ginter, Filip
    University of Turku, Finland.
    Winata, Genta Indra
    The Hong Kong University of Science and Technology, Hong Kong.
    Strobelt, Hendrik
    IBM Research, USA; MIT-IBM Watson AI Lab, USA.
    Hayashi, Hiroaki
    Carnegie Mellon University, USA; Salesforce Research, USA.
    Novikova, Jekaterina
    Winterlight Labs, Canada.
    Kanerva, Jenna
    University of Turku, Finland.
    Chim, Jenny
    Queen Mary University of London, UK.
    Zhou, Jiawei
    Harvard University, USA.
    Clive, Jordan
    Chattermill, UK.
    Maynez, Joshua
    Google Research.
    Sedoc, João
    New York University, USA.
    Juraska, Juraj
    University of California, Santa Cruz, USA.
    Dhole, Kaustubh
    Emory University, USA.
    Chandu, Khyathi Raghavi
    Meta AI.
    Perez-Beltrachini, Laura
    University of Edinburgh, Scotland.
    Ribeiro, Leonardo F.R.
    Technical University of Darmstadt, Germany.
    Tunstall, Lewis
    Hugging Face.
    Zhang, Li
    University of Pennsylvania, USA.
    Pushkarna, Mahima
    Google Research.
    Creutz, Mathias
    University of Helsinki, Finland.
    White, Michael
    The Ohio State University, USA.
    Kale, Mihir Sanjay
    Google Research.
    Eddine, Moussa Kamal
    École Polytechnique, France.
    Daheim, Nico
    RWTH Aachen University, Germany.
    Subramani, Nishant
    Allen Institute for AI, USA; Masakhane.
    Dusek, Ondrej
    Charles University, Czech Republic.
    Liang, Paul Pu
    Carnegie Mellon University, USA.
    Ammanamanchi, Pawan Sasanka
    IIIT Hyderabad, India.
    Zhu, Qi
    Tsinghua University, China.
    Puduppully, Ratish
    University of Edinburgh, Scotland.
    Kriz, Reno
    Johns Hopkins University, USA.
    Shahriyar, Rifat
    Bangladesh University of Engineering and Technology, Bangladesh.
    Cardenas, Ronald
    University of Edinburgh, Scotland.
    Mahamood, Saad
    trivago N.V.
    Osei, Salomey
    Masakhane.
    Cahyawijaya, Samuel
    HKUST.
    Štajner, Sanja
    Pompeu Fabra University, Spain.
    Montella, Sebastien
    Orange Labs.
    Jolly, Shailza
    TU Kaiserslautern, Germany.
    Mille, Simon
    Pompeu Fabra University, Spain.
    Hasan, Tahmid
    Bangladesh University of Engineering and Technology, Bangladesh.
    Shen, Tianhao
    Tianjin University, China.
    Adewumi, Tosin
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Raunak, Vikas
    Microsoft.
    Raheja, Vipul
    Grammarly.
    Nikolaev, Vitaly
    Google Research.
    Tsai, Vivian
    Google Research.
    Jernite, Yacine
    Hugging Face.
    Xu, Ying
    University of Michigan, USA.
    Sang, Yisi
    Syracuse University, USA.
    Liu, Yixin
    Yale University, USA.
    Hou, Yufang
    IBM Research.
    GEMv2: Multilingual NLG Benchmarking in a Single Line of Code2022In: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, Association for Computational Linguistics (ACL) , 2022, p. 266-281Conference paper (Refereed)
    Abstract [en]

    Evaluations in machine learning rarely use the latest metrics, datasets, or human evaluation, in favor of remaining compatible with prior work. This compatibility, often facilitated through leaderboards, thus leads to outdated but standardized evaluation practices. We posit that the standardization is taking place in the wrong spot. Evaluation infrastructure should enable researchers to use the latest methods; what should be standardized instead is how new evaluation advances are incorporated. We introduce GEMv2, the new version of the Generation, Evaluation, and Metrics Benchmark, which uses a modular infrastructure that lets dataset, model, and metric developers benefit from each other's work. GEMv2 supports 40 documented datasets in 51 languages and ongoing online evaluation for all datasets, and its interactive tools make it easier to add new datasets to the living benchmark.
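
    The "single line of code" in the title refers to the data-loading side: GEM datasets are distributed through the Hugging Face hub, so loading one task can look like the sketch below. The repository and config names here are assumptions, and availability may depend on the datasets library version.

    ```python
    # Hedged sketch: loading one GEM task via the Hugging Face `datasets` hub.
    # Repository/config names are assumptions; availability may vary by version.
    from datasets import load_dataset

    web_nlg = load_dataset("GEM/web_nlg", "en", split="validation")
    print(web_nlg[0])  # one NLG example with its input and reference target(s)
    ```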

  • 30.
    Javed, Saleha
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Adewumi, Oluwatosin
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Foteini
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Marcus
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Understanding the Role of Objectivity in Machine Learning and Research Evaluation2021In: Philosophies, ISSN 2409-9287, Vol. 6, no 1, article id 22Article in journal (Refereed)
    Abstract [en]

    This article makes the case for more objectivity in Machine Learning (ML) research. Any research work that claims to hold benefits has to be scrutinized based on many parameters, such as the methodology employed, ethical considerations and its theoretical or technical contribution. We approach this discussion from a Naturalist philosophical outlook. Although every analysis may be subjective, it is important for the research community to keep vetting the research for continuous growth and to produce even better work. We suggest standardizing some of the steps in ML research in an objective way and being aware of various biases threatening objectivity. The ideal of objectivity keeps research rational since objectivity requires beliefs to be based on facts. We discuss some of the current challenges, the role of objectivity in the two elements (product and process) that are up for consideration in ML and make recommendations to support the research community.

  • 31.
    Kovács, György
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Saini, Rajkumar
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Faridghasemnia, Mohamadreza
    Örebro University, Örebro, Sweden.
    Mokayed, Hamam
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Adewumi, Tosin
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Alonso, Pedro
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Rakesh, Sumit
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Marcus
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Pedagogical Principles in the Online Teaching of NLP: A Retrospection2021In: Teaching NLP: Proceedings of the Fifth Workshop / [ed] David Jurgens; Varada Kolhatkar; Lucy Li; Margot Mieskes; Ted Pedersen, Association for Computational Linguistics (ACL) , 2021, p. 1-12Conference paper (Refereed)
    Abstract [en]

    The ongoing COVID-19 pandemic has brought online education to the forefront of pedagogical discussions. To make this increased interest sustainable in a post-pandemic era, online courses must be built on strong pedagogical foundations. With a long history of pedagogic research, there are many principles, frameworks, and models available to help teachers in doing so. These models cover different teaching perspectives, such as constructive alignment, feedback, and the learning environment. In this paper, we discuss how we designed and implemented our online Natural Language Processing (NLP) course following constructive alignment and adhering to the pedagogical principles of LTU. By examining our course and analyzing student evaluation forms, we show that we have met our goal and successfully delivered the course. Furthermore, we discuss the additional benefits resulting from the current mode of delivery, including the increased reusability of course content and increased potential for collaboration between universities. Lastly, we also discuss where we can and will further improve the current course design.

  • 32.
    Sabry, Sana Sabah
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Adewumi, Tosin
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Abid, Nosheen
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Kovács, György
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Foteini
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Marcus
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    HaT5: Hate Language Identification using Text-to-Text Transfer Transformer2022In: 2022 International Joint Conference on Neural Networks (IJCNN): Conference Proceedings, Institute of Electrical and Electronics Engineers (IEEE), 2022Conference paper (Refereed)
    Abstract [en]

    We investigate the performance of a state-of-the-art (SoTA) architecture, T5 (available on the SuperGLUE leaderboard), and compare it with three other previous SoTA architectures across five different tasks from two relatively diverse datasets. The datasets are diverse in terms of the number and types of tasks they contain. To improve performance, we augment the training data by using a new autoregressive conversational AI model checkpoint. We achieve near-SoTA results on a couple of the tasks: macro F1 scores of 81.66% for task A of the OLID 2019 dataset and 82.54% for task A of the hate speech and offensive content (HASOC) 2021 dataset, where the SoTA results are 82.9% and 83.05%, respectively. We perform error analysis and explain why one of the models (Bi-LSTM) makes the predictions it does, using a publicly available algorithm: Integrated Gradients (IG). This matters because explainable artificial intelligence (XAI) is essential for earning the trust of users. The main contributions of this work are the implementation method of T5, which is discussed; the data augmentation, which brought performance improvements; and the revelation of the shortcomings of the HASOC 2021 dataset. The revelation shows the difficulties of poor data annotation through a small set of examples where the T5 model made the correct predictions even when the ground truth of the test set was, in our opinion, incorrect. We also provide our model checkpoints on the HuggingFace hub: https://huggingface.co/sana-ngu/HaT5_augmentation and https://huggingface.co/sana-ngu/HaT5.
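
    Because the abstract publishes the checkpoints on the HuggingFace hub, a hedged sketch of querying one of them with the standard transformers text-to-text API is given below; the exact input formatting the authors used during fine-tuning is an assumption.

    ```python
    # Hedged sketch: hate-speech prediction with the released HaT5 checkpoint.
    # The raw-text input format is an assumption about the fine-tuning setup.
    from transformers import T5ForConditionalGeneration, T5Tokenizer

    tokenizer = T5Tokenizer.from_pretrained("sana-ngu/HaT5")
    model = T5ForConditionalGeneration.from_pretrained("sana-ngu/HaT5")

    inputs = tokenizer("You are a waste of space.", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=5)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # label text
    ```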

  • 33.
    Wang, Jiayi
    et al.
    University College London, UK.
    Adelani, David Ifeoluwa
    University College London, UK; Masakhane NLP.
    Agrawal, Sweta
    University of Maryland, USA.
    Masiak, Marek
    University College London, UK.
    Rei, Ricardo
    Unbabel; Instituto Superior Técnico; INESC-ID.
    Briakou, Eleftheria
    University of Maryland, USA.
    Carpuat, Marine
    University of Maryland, USA.
    He, Xuanli
    University College London, UK.
    Bourhim, Sofia
    ENSIAS, Morocco.
    Bukula, Andiswa
    SADiLaR, South Africa.
    Mohamed, Muhidin
    Aston University, UK.
    Olatoye, Temitayo
    University of Eastern Finland, Finland.
    Adewumi, Tosin
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Mokayed, Hamam
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Mwase, Christine
    Fudan University, China.
    Kimotho, Wangui
    Masakhane NLP.
    Yuehgoh, Foutse
    Conservatoire National des Arts et Métiers, France.
    Aremu, Anuoluwapo
    Masakhane NLP.
    Ojo, Jessica
    Masakhane NLP; Lelapa AI, South Africa.
    Muhammad, Shamsuddeen Hassan
    Masakhane NLP; Imperial College London, UK; HausaNLP.
    Osei, Salomey
    Masakhane NLP; University of Deusto, Spain.
    Omotayo, Abdul-Hakeem
    Masakhane NLP; University of California, USA.
    Chukwuneke, Chiamaka
    Masakhane NLP; Lancaster University, UK.
    Ogayo, Perez
    Masakhane NLP.
    Hourrane, Oumaima
    Masakhane NLP.
    Anigri, Salma El
    Mohammed V University, Morocco.
    Ndolela, Lolwethu
    Masakhane NLP.
    Mangwana, Thabiso
    Masakhane NLP.
    Mohamed, Shafie Abdi
    Jamhuriya University Of Science and Technology, Somalia.
    Hassan, Ayinde
    LAUTECH, Nigeria.
    Awoyomi, Oluwabusayo Olufunke
    The College of Saint Rose, USA.
    Alkhaled, Lama
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems.
    Al-Azzawi, Sana
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Etori, Naome A.
    University of Minnesota -Twin Cities, USA.
    Ochieng, Millicent
    Microsoft Africa Research Institute.
    Siro, Clemencia
    University of Amsterdam, Netherlands.
    Njoroge, Samuel
    The Technical University of Kenya.
    Muchiri, Eric
    Masakhane NLP.
    Kimotho, Wangari
    AIMS, Cameroon.
    Momo, Lyse Naomi Wamba
    KU Leuven, Belgium.
    Abolade, Daud
    Masakhane NLP.
    Ajao, Simbiat
    Masakhane NLP.
    Shode, Iyanuoluwa
    Masakhane NLP.
    Macharm, Ricky
    Masakhane NLP.
    Iro, Ruqayya Nasir
    HausaNLP.
    Abdullahi, Saheed S.
    SIAT-CAS, China; Kaduna State University, Nigeria.
    Moore, Stephen E.
    University of Cape Coast, Ghana; Ghana NLP.
    Opoku, Bernard
    Masakhane NLP; Kwame Nkrumah University of Science and Technology, Ghana.
    Akinjobi, Zainab
    Masakhane NLP; New Mexico State University, USA.
    Afolabi, Abeeb
    Masakhane NLP.
    Obiefuna, Nnaemeka
    Masakhane NLP.
    Ogbu, Onyekachi Raphael
    Masakhane NLP.
    Brian, Sam
    Masakhane NLP.
    Otiende, Verrah Akinyi
    USIU-Africa.
    Mbonu, Chinedu Emmanuel
    UNIZIK, Nigeria.
    Sari, Sakayo Toadoum
    AIMS, Senegal.
    Lu, Yao
    University College London, UK.
    Stenetorp, Pontus
    University College London, UK.
    AfriMTE and AfriCOMET: Enhancing COMET to Embrace Under-resourced African Languages2024In: Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2024 / [ed] Duh K.; Gomez H.; Bethard S., Association for Computational Linguistics (ACL) , 2024, p. 5997-6023, article id 200463Conference paper (Refereed)