Digitala Vetenskapliga Arkivet

A Few Thousand Translations Go A Long Way! Leveraging Pre-trained Models for African News Translation
Saarland University, Saarbrücken, Germany
INRIA, Paris, France
Meta AI, Menlo Park, CA, USA
Google Research, Mountain View, CA, USA
2022 (English) In: NAACL 2022: The 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Stroudsburg: Association for Computational Linguistics, 2022, pp. 3053-3070. Conference paper, Published paper (Refereed)
Abstract [en]

Recent advances in the pre-training of language models leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out in these datasets. This is primarily because many widely spoken languages are not well represented on the web and therefore excluded from the large-scale crawls used to create datasets. Furthermore, downstream users of these models are restricted to the selection of languages originally chosen for pre-training. This work investigates how to optimally leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pre-training? and 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a new African news corpus covering 16 languages, of which eight languages are not part of any existing evaluation dataset. We demonstrate that the most effective strategy for transferring both to additional languages and to additional domains is to fine-tune large pre-trained models on small quantities of high-quality translation data.
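The abstract's headline recipe, fine-tuning a large pre-trained model on a small quantity of high-quality parallel data, can be illustrated with a minimal sketch using the Hugging Face transformers library. The checkpoint choice (facebook/m2m100_418M), the English-Hausa direction, and the toy sentence pair are illustrative assumptions, not the authors' released setup or data.

```python
# Minimal sketch (not the authors' code): fine-tune a pre-trained
# multilingual MT model on a small, high-quality parallel corpus.
from datasets import Dataset
from transformers import (
    DataCollatorForSeq2Seq,
    M2M100ForConditionalGeneration,
    M2M100Tokenizer,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

model_name = "facebook/m2m100_418M"
tokenizer = M2M100Tokenizer.from_pretrained(model_name)
model = M2M100ForConditionalGeneration.from_pretrained(model_name)

# A few thousand sentence pairs; this toy list stands in for a real
# English-Hausa news corpus.
pairs = [
    {"en": "The president spoke to reporters on Monday.",
     "ha": "Shugaban kasa ya yi magana da manema labarai ranar Litinin."},
    # ... load the rest of your parallel data here
]
dataset = Dataset.from_list(pairs)

# M2M-100 requires explicit source/target language codes.
tokenizer.src_lang = "en"
tokenizer.tgt_lang = "ha"

def preprocess(batch):
    # Tokenize source sentences and target sentences as labels.
    model_inputs = tokenizer(batch["en"], truncation=True, max_length=128)
    labels = tokenizer(text_target=batch["ha"], truncation=True, max_length=128)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, batched=True, remove_columns=["en", "ha"])

training_args = Seq2SeqTrainingArguments(
    output_dir="m2m100-en-ha-news",
    per_device_train_batch_size=8,
    learning_rate=5e-5,
    num_train_epochs=3,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```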

Place, publisher, year, edition, pages
Stroudsburg: Association for Computational Linguistics, 2022. pp. 3053-3070
National subject category
Language Technology (Computational Linguistics)
Identifiers
URN: urn:nbn:se:uu:diva-489248
ISI: 000859869503014
ISBN: 978-1-955917-71-1 (print)
OAI: oai:DiVA.org:uu-489248
DiVA, id: diva2:1714241
Conference
Conference of the North American Chapter of the Association for Computational Linguistics (NAACL) - Human Language Technologies, July 10-15, 2022, Seattle, WA
Research funder
EU, Horizon 2020, 3081705
EU, Horizon 2020, 833635
Available from: 2022-11-29 Created: 2022-11-29 Last updated: 2023-02-06 Bibliographically reviewed

Open Access in DiVA

Full text not available in DiVA

Other links

Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
By organisation
Department of Linguistics and Philology
Language Technology (Computational Linguistics)
