ai2-olmo-eval 0.8.6
In-loop evaluation tasks for language modeling
pip install ai2-olmo-eval
Requires Python: >=3.9
Dependencies
- torch
- torchmetrics
- datasets>=3.6.0,<4
- cached-path
- requests
- packaging
- importlib_resources
- tokenizers>=0.19.1,<0.20
- pyarrow>=19.0,<20
- ruff; extra == "dev"
- mypy>=1.0,<1.4; extra == "dev"
- black>=23.1,<24.0; extra == "dev"
- isort>=5.12,<5.13; extra == "dev"
- pytest; extra == "dev"
- twine>=1.11.0; extra == "dev"
- setuptools; extra == "dev"
- wheel; extra == "dev"
- build; extra == "dev"
- boto3; extra == "dev"
- google-cloud-storage; extra == "dev"
- ai2-olmo-eval[dev]; extra == "all"
OLMo-in-loop-evals
Code for in-loop evaluation tasks used by the OLMo training team.
Installation
pip install ai2-olmo-eval
Release process
Steps
1. Update the version in `src/olmo_eval/version.py`.
2. Run the release script: `./src/scripts/release.sh`

This will commit the changes to the `CHANGELOG` and `version.py` files and then create a new git tag, which triggers a workflow on GitHub Actions that handles the rest.
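If `version.py` exposes the version as a simple string assignment (an assumption; check the actual file before scripting this), step 1 can be automated with a one-liner before running the release script:

```shell
# Bump the version string in version.py prior to running ./src/scripts/release.sh.
# Assumes a line of the form: VERSION = "0.8.5" (verify against the real file).
NEW_VERSION="0.8.6"
sed -i.bak "s/^VERSION = \".*\"/VERSION = \"${NEW_VERSION}\"/" src/olmo_eval/version.py
```

The `.bak` suffix keeps a backup of the original file, which is handy if the pattern doesn't match what the repo actually uses.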
Fixing a failed release
If the GitHub Actions release workflow fails with an error that needs to be fixed, you'll have to delete the tag on GitHub. Once you've pushed a fix, simply repeat the steps above.
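Deleting the tag can be done with standard git commands (the tag name `v0.8.6` below is an example; substitute the version that failed):

```shell
# Delete the tag locally, then remove it from the GitHub remote.
git tag -d v0.8.6
git push --delete origin v0.8.6
```

After the tag is gone on both sides, pushing the fix and re-running the release script will recreate the tag and retrigger the workflow.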