Gpt2-Chatbot Removed from Lmsys

https://lmsys.org/blog/2024-03-01-policy/

Our Mission

Chatbot Arena (lmarena.ai) is an open-source project developed by members from LMSYS and UC Berkeley SkyLab. Our mission is to advance LLM development and understanding through live, open, and community-driven evaluations. We maintain an open evaluation platform where any user can rate LLMs via pairwise comparisons under real-world use cases, and we publish the leaderboard periodically.

Our Progress

Chatbot Arena was first launched in May 2023 and has emerged as a critical platform for live, community-driven LLM evaluation, attracting millions of participants and collecting over 800,000 votes. This extensive engagement has enabled the evaluation of more than 90 LLMs, including both commercial models (GPT-4, Gemini/Bard) and open-weight models (Llama, Mistral), significantly enhancing our understanding of their capabilities and limitations.

Our periodic leaderboard and blog post updates have become a valuable resource for the community, offering critical insights into model performance that guide the ongoing development of LLMs. Our commitment to open science is further demonstrated through the sharing of user preference data and one million user prompts, supporting research and model improvement.

We also collaborate with open-source and commercial model providers to bring their latest models to the community for preview testing. We believe this initiative helps advance the field and encourages user engagement, collecting crucial votes for evaluating all the models in the Arena. Moreover, it provides an opportunity for the community to test the models and provide anonymized feedback before they are officially released.

The platform's infrastructure (FastChat) and evaluation tools, available on GitHub, emphasize our dedication to transparency and community engagement in the evaluation process. This approach not only enhances the reliability of our findings but also fosters a collaborative environment for advancing LLMs.

In our ongoing efforts, we feel obligated to establish policies that guarantee evaluation transparency and trustworthiness. Moreover, we actively involve the community in shaping any modifications to the evaluation process, reinforcing our commitment to openness and collaborative progress.

Our Policy

Last Updated: May 31, 2024

Open source: The platform (FastChat), including the UI frontend, model-serving backend, and model evaluation and ranking pipelines, is fully open source and available on GitHub. This means that anyone can clone, audit, or run another instance of Chatbot Arena to produce a similar leaderboard.

Transparent: The evaluation process, including rating computation, anomalous-user detection, and LLM selection, is made fully public so others can reproduce our analysis and understand how the data is collected. Furthermore, we will involve the community in deciding any changes to the evaluation process.
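
As an illustration of how ratings can be derived from pairwise votes, the following is a minimal Python sketch of a Bradley-Terry-style fit. It is not the Arena's production pipeline; the vote format, the iteration count, and the conversion to an Elo-like scale are assumptions made for the example.

    from collections import defaultdict
    import math

    def bradley_terry(votes, iters=200):
        # votes: list of (winner, loser) model-name pairs; ties are ignored here
        # returns model -> strength, normalized to sum to 1
        wins = defaultdict(float)
        pair_counts = defaultdict(int)
        models = set()
        for winner, loser in votes:
            wins[winner] += 1
            pair_counts[frozenset((winner, loser))] += 1
            models.update((winner, loser))

        strength = {m: 1.0 for m in models}
        for _ in range(iters):
            updated = {}
            for i in models:
                denom = 0.0
                for pair, n in pair_counts.items():
                    if i in pair:
                        (j,) = pair - {i}
                        denom += n / (strength[i] + strength[j])
                # classic minorization-maximization update for Bradley-Terry
                updated[i] = wins[i] / denom if denom > 0 else strength[i]
            total = sum(updated.values())
            strength = {m: s / total for m, s in updated.items()}
        return strength

    # toy example: every model has at least one win, so the fit stays finite
    votes = [("model_a", "model_b"), ("model_b", "model_c"),
             ("model_a", "model_c"), ("model_c", "model_a")]
    strengths = bradley_terry(votes)
    # convert to an Elo-like scale for display (the offset is arbitrary)
    ratings = {m: 400 * math.log10(s) + 1000 for m, s in strengths.items()}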

Listing models on the leaderboard: The public leaderboard will only include models that are accessible to other third parties. Specifically, it will only include models that (1) have open weights, (2) are publicly available through an API (e.g., gpt-4-0613, gemini-pro-api), or (3) are available as a service (e.g., Bard, GPT-4+browsing). In the remainder of this document we refer to these models as publicly released models.

Once a publicly released model is listed on the leaderboard, the model will remain accessible at lmarena.ai for at least two weeks for the community to evaluate it.

Evaluating publicly released models: Evaluating such a model consists of the following steps:

  1. Add the model to Arena for blind testing and let the community know it was added.
  2. Accumulate enough votes until the model's rating stabilizes (one possible way to check stabilization is sketched after this list).
  3. Once the model's rating stabilizes, we list the model on the public leaderboard. There is one exception: the model provider can reach out before its listing and ask for a one-day heads-up. In this case, we will privately share the rating with the model provider and wait for an additional day before listing the model on the public leaderboard.
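
One plausible way to decide that a rating has "stabilized" is to bootstrap the votes and wait for the width of the resulting confidence interval to fall below a threshold. The sketch below reuses the bradley_terry helper and the votes list from the earlier sketch; the number of bootstrap rounds and the 15-point threshold are illustrative assumptions, not the Arena's actual criterion.

    import random, math

    def rating_ci_width(votes, model, n_boot=100, seed=0):
        # resample the votes with replacement and measure the spread of the
        # model's rating; the additive Elo offset cancels out of the width
        rng = random.Random(seed)
        samples = []
        for _ in range(n_boot):
            resampled = [rng.choice(votes) for _ in range(len(votes))]
            strengths = bradley_terry(resampled)
            if strengths.get(model, 0) > 0:
                samples.append(400 * math.log10(strengths[model]))
        samples.sort()
        lo = samples[int(0.025 * len(samples))]
        hi = samples[int(0.975 * len(samples))]
        return hi - lo

    # hypothetical rule: list the model once its 95% interval is narrower
    # than ~15 rating points
    stable = rating_ci_width(votes, "model_a") < 15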

Evaluating unreleased models: We collaborate with open-source and commercial model providers to bring their unreleased models to the community for preview testing.

Model providers can test their unreleased models anonymously, meaning the models' names will be anonymized. A model is considered unreleased if its weights are neither open, nor available via a public API or service. Evaluating an unreleased model consists of the following steps:

  1. Add the model to Arena with an anonymous label, i.e., its identity will not be shown to users.
  2. Keep it until we accumulate enough votes for its rating to stabilize or until the model provider withdraws it.
  3. Once we accumulate enough votes, we will share the result privately with the model provider. This includes the rating, as well as sample data comprising up to 20% of the votes. (See "Sharing data with the model providers" for further details.)
  4. Remove the model from Arena.

If an unreleased model is publicly released while we are testing it, we immediately switch to the evaluation process for publicly released models.

To ensure the leaderboard accurately reflects model rankings, we rely on live comparisons between models. Hence, we may deprecate models from the leaderboard one month after they are no longer available online or publicly accessible.

Sharing data with the community: We will periodically share 20% of the Arena vote data we have collected, including the prompts, the answers, the identity of the model providing each answer (if the model is or has been on the leaderboard), and the votes. For models we have collected votes for but that have never been on the leaderboard, we will still release the data, but we will label the model as "anonymous".
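
A minimal sketch of what such a release filter might look like is shown below; the record schema, the per-release 20% sampling, and the leaderboard set are illustrative assumptions rather than the exact pipeline.

    import random

    def prepare_release(battles, leaderboard_models, share=0.2, seed=0):
        # battles: list of dicts with "prompt", "model_a", "model_b",
        # "answer_a", "answer_b", and "winner" (illustrative schema)
        rng = random.Random(seed)
        sampled = rng.sample(battles, k=int(share * len(battles)))
        released = []
        for battle in sampled:
            record = dict(battle)
            # hide identities of models that have never been on the leaderboard
            for side in ("model_a", "model_b"):
                if record[side] not in leaderboard_models:
                    record[side] = "anonymous"
            released.append(record)
        return released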

Sharing data with the model providers: Upon request, we will offer early data access to model providers who wish to improve their models. However, this data will be a subset of the data that we periodically share with the community. In particular, with a model provider we will share the data that includes their model's answers. For battles, we may not reveal the opponent model and may use an "anonymous" label. This data will later be shared with the community during the periodic releases. If the model is not on the leaderboard at the time of sharing, the model's answers will also be labeled as "anonymous". Before sharing the data, we will remove user PII (e.g., using Azure PII detection for text).
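
For illustration, text redaction with the Azure Text Analytics PII endpoint might look roughly like the sketch below; the endpoint, key, and batching details are placeholders, and this is not necessarily the exact redaction pipeline used in practice.

    from azure.ai.textanalytics import TextAnalyticsClient
    from azure.core.credentials import AzureKeyCredential

    # placeholder endpoint and key, for illustration only
    client = TextAnalyticsClient(
        endpoint="https://<your-resource>.cognitiveservices.azure.com/",
        credential=AzureKeyCredential("<your-key>"),
    )

    def redact_pii(texts):
        # returns the texts with detected PII spans replaced by asterisks
        results = client.recognize_pii_entities(texts, language="en")
        return [doc.redacted_text if not doc.is_error else None for doc in results]

    cleaned = redact_pii(["My email is jane@example.com, please help me draft a reply."])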

FAQ

Why another eval?

Most LLM benchmarks are static, which makes them prone to contamination, as LLMs are trained on much of the data available on the Internet. Chatbot Arena aims to alleviate this problem by providing live evaluation with a continuous stream of new prompts from real people. We also believe that the open nature of the platform will attract users who accurately reflect the broader set of LLM users and real use cases.

Which models to evaluate? Why not all?

We will continuously add new models and retire old ones. It is not feasible to add every possible model due to the cost and the scalability of our evaluation process, i.e., it might take too long to accumulate enough votes to accurately rate each model. Today, the decision to add new models is rather ad hoc: we add models based on the community's perceived interest. We intend to formalize this process in the near future.

Why should the community trust our eval?

We seek to provide transparency: all of our tools, as well as the platform itself, are open source. We invite the community to use our platform and tools to statistically reproduce our results.

Why do you only share 20% of data, not all?

Arena data is used for LLM benchmarking purposes. We share only a portion of it periodically to mitigate the potential risk of overfitting or benchmark leakage. We will actively review this policy based on the community's feedback.

Who will fund this effort? Any conflicts of interest?

Chatbot Arena is funded solely by gifts of money, cloud credits, or API credits. The gifts have no strings attached.

Any feedback?

Feel free to send us an email or leave feedback on GitHub!
