Bulk Testing
The Bulk Test feature lets you validate AI agent performance at scale. Up to 1,000 user questions can be added manually or via CSV upload. The tool generates AI responses in a tabular view where you can rate outputs, trigger re-tests, and train the AI agent directly. This accelerates knowledge validation across large support datasets.

Prerequisites

- An active Enjo workspace with at least one configured AI agent
- Access to AI Agent Studio
- Permissions to train the AI agent and view test results

How to Use Bulk Test

Step 1: Navigate to Bulk Test

1. Go to the Enjo workspace and select an AI agent.
2. Click the Bulk Test tab.

Step 2: Add Questions

You can add questions in two ways.

Option A: Manual entry
1. Click Add Questions Manually.
2. Enter up to 20 questions per input session.
3. Click Add to append them to the test list.

Option B: CSV upload
1. Click Upload CSV File.
2. Upload a file containing one column of questions. The system parses the file and adds valid entries to the batch.

The maximum combined limit (manual + CSV) is 1,000 questions. An error displays if the limit is exceeded.

Step 3: Run the Test

After adding questions, click Run Test. Enjo processes the inputs and generates responses.

Step 4: Review Test Results

Results appear in a table with the following columns:

Column       Description
User Query   The input question
Status       Answered / Unanswered
Feedback     User rating: Helpful / Not Helpful / Not Rated

Actions and Controls

- Re-run Test: Re-execute the test for all questions or only unanswered ones
- Rate Response: Mark a response as Helpful / Not Helpful
- View Details: Expand a row to see the full AI response
- Train AI Agent: Add an answer, use an existing answer, or apply an AI action
- Dismiss Query: Remove the question from the queue

Best Practices

- Use CSV upload for batches of more than 100 questions.
- Rate responses promptly to improve training suggestions.
- Re-run tests after knowledge updates or training actions.

Limitations

- Maximum of 1,000 questions per batch
- Only one active test batch per AI agent at a time
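The CSV described above (a single column of questions, capped at 1,000 entries) can be prepared and sanity-checked locally before upload. The sketch below is illustrative only; the file layout beyond "one column of questions" (e.g. whether a header row is expected) is an assumption, so adjust to what your Enjo workspace accepts:

```python
import csv

MAX_QUESTIONS = 1000  # combined manual + CSV limit documented for a batch

def write_questions_csv(questions, path):
    """Write one question per row in a single column, enforcing the batch limit.

    Blank entries are skipped, mirroring the tool's behavior of only
    adding valid rows to the batch.
    """
    cleaned = [q.strip() for q in questions if q.strip()]
    if len(cleaned) > MAX_QUESTIONS:
        raise ValueError(
            f"{len(cleaned)} questions exceeds the {MAX_QUESTIONS}-question batch limit"
        )
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        for q in cleaned:
            writer.writerow([q])  # no header row assumed

write_questions_csv(
    ["How do I reset my password?", "  ", "Where can I view invoices?"],
    "bulk_test.csv",
)
```

Validating the count locally surfaces the over-limit error before upload rather than after.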
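For datasets larger than the 1,000-question cap, the questions have to be split into multiple batches and run one after another, since only one test batch can be active per AI agent at a time. A minimal, hypothetical helper for that split:

```python
MAX_QUESTIONS = 1000  # per-batch limit documented above

def batch_questions(questions, batch_size=MAX_QUESTIONS):
    """Yield successive batches no larger than batch_size.

    Each batch fits within one Bulk Test run; batches are intended
    to be uploaded sequentially, one active batch at a time.
    """
    for start in range(0, len(questions), batch_size):
        yield questions[start:start + batch_size]

batches = list(batch_questions([f"question {i}" for i in range(2500)]))
# 2,500 questions -> three batches of 1000, 1000, and 500
```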