This week’s paper explores EvalGen, a mixed-initiative approach to aligning LLM-generated evaluation functions with human preferences. EvalGen assists users both in developing criteria for acceptable LLM outputs and in building functions that check those criteria, ensuring evaluations reflect the users’ own grading standards.
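To make the core loop concrete, here is a minimal sketch of the alignment step: candidate assertion functions are scored against a user's thumbs-up/thumbs-down grades, and only well-aligned ones are kept. The toy criteria, the `Candidate`, `alignment`, and `select_assertions` names, and the 0.8 threshold are illustrative assumptions, not EvalGen's actual interface (in the paper, criteria and assertions are proposed by an LLM rather than hand-written).

```python
from dataclasses import dataclass
from typing import Callable

# An assertion takes one LLM output and returns pass/fail.
Assertion = Callable[[str], bool]

@dataclass
class Candidate:
    criterion: str    # natural-language standard, e.g. "response is concise"
    check: Assertion  # executable function implementing that criterion

def alignment(candidate: Candidate, outputs: list[str],
              human_grades: list[bool]) -> float:
    """Fraction of outputs where the assertion agrees with the human grade."""
    verdicts = [candidate.check(o) for o in outputs]
    agree = sum(v == g for v, g in zip(verdicts, human_grades))
    return agree / len(outputs)

def select_assertions(candidates: list[Candidate], outputs: list[str],
                      human_grades: list[bool],
                      threshold: float = 0.8) -> list[Candidate]:
    """Keep only candidate assertions well aligned with human grades."""
    return [c for c in candidates
            if alignment(c, outputs, human_grades) >= threshold]

if __name__ == "__main__":
    outputs = ["Short answer.", "A very long rambling answer " * 20]
    human_grades = [True, False]  # thumbs-up / thumbs-down from the user
    candidates = [
        # Hand-written stand-ins for LLM-proposed assertions.
        Candidate("response is concise", lambda o: len(o) < 200),
        Candidate("response cites a source", lambda o: "http" in o),
    ]
    for c in select_assertions(candidates, outputs, human_grades):
        print("kept:", c.criterion)  # only the concise check survives
```

The point of this sketch is the selection criterion itself: rather than trusting every generated validator, EvalGen-style workflows grade validators against human judgments and discard the ones that disagree.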
Read it on the blog: https://arize.com/blog/breaking-down-evalgen-who-validates-the-validators/
Learn more about AI observability and evaluation in our course, join the Arize AI Slack community, or get the latest on LinkedIn and X.