Building a Large Japanese Web Corpus for Large Language Models

https://arxiv.org/abs/2404.17733

Computer Science > Computation and Language

arXiv:2404.17733 (cs)

Authors: Naoaki Okazaki and 9 other authors


Abstract: Open Japanese large language models (LLMs) have been trained on the Japanese portions of corpora such as CC-100, mC4, and OSCAR. However, these corpora were not built with the quality of Japanese text in mind. This study builds a large Japanese web corpus by extracting and refining text from the Common Crawl archive (21 snapshots of approximately 63.4 billion pages crawled between 2020 and 2023). The resulting corpus consists of approximately 312.1 billion characters (approximately 173 million pages), the largest of all available training corpora for Japanese LLMs, surpassing CC-100 (approximately 25.8 billion characters), mC4 (approximately 239.7 billion characters), and OSCAR 23.10 (approximately 74 billion characters). To confirm the quality of the corpus, we performed continual pre-training on Llama 2 7B, 13B, 70B, Mistral 7B v0.1, and Mixtral 8x7B Instruct as base LLMs and obtained consistent improvements of 6.6 to 8.1 points on Japanese benchmark datasets. We also demonstrate that the improvement on Llama 2 13B brought by the presented corpus was the largest among those from the existing corpora.
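The abstract describes extracting and refining Japanese text from Common Crawl snapshots. Below is a minimal sketch, not the authors' pipeline, of what such a step can look like: it reads text records from a single Common Crawl WET segment and keeps only pages that appear to be Japanese, using a character-ratio heuristic plus exact-hash deduplication. The `warcio` dependency, the 0.3 ratio threshold, and the SHA-1 dedup are illustrative assumptions, not details taken from the paper, which applies considerably more elaborate filtering and refinement.

```python
# Sketch: pull text out of a Common Crawl WET file and keep Japanese-looking pages.
# Assumes the third-party `warcio` package; thresholds are illustrative only.
import gzip
import hashlib
import re

from warcio.archiveiterator import ArchiveIterator

# Hiragana, katakana, and common CJK ideograph ranges.
JA_CHARS = re.compile(r"[\u3040-\u309F\u30A0-\u30FF\u4E00-\u9FFF]")


def japanese_ratio(text: str) -> float:
    """Fraction of non-whitespace characters that fall in Japanese ranges."""
    chars = [c for c in text if not c.isspace()]
    if not chars:
        return 0.0
    return sum(1 for c in chars if JA_CHARS.match(c)) / len(chars)


def extract_japanese_pages(wet_path: str, min_ratio: float = 0.3):
    """Yield (url, text) pairs judged to be Japanese, with exact-duplicate removal."""
    seen = set()
    with gzip.open(wet_path, "rb") as stream:
        for record in ArchiveIterator(stream):
            if record.rec_type != "conversion":  # WET plain-text records
                continue
            text = record.content_stream().read().decode("utf-8", errors="ignore")
            if japanese_ratio(text) < min_ratio:
                continue
            digest = hashlib.sha1(text.encode("utf-8")).hexdigest()
            if digest in seen:  # drop exact duplicates only
                continue
            seen.add(digest)
            url = record.rec_headers.get_header("WARC-Target-URI")
            yield url, text


if __name__ == "__main__":
    # Example with a hypothetical locally downloaded WET segment.
    for url, text in extract_japanese_pages("CC-MAIN-example.warc.wet.gz"):
        print(url, len(text))
```

A real pipeline at the scale reported in the paper (tens of billions of pages) would run such filtering in a distributed fashion and add language identification, quality scoring, and near-duplicate detection on top of this kind of heuristic.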

Submission history

From: Naoaki Okazaki
[v1] Sat, 27 Apr 2024 00:02:45 UTC (307 KB)

