Highlights of Guild AI
At-a-glance run comparisons
Guild summarizes important run results and presents them in a sortable, filterable table for quick access. Results update in real time, letting you monitor your current experiment while comparing it to previous runs.
Compare model training and system statistics
Guild lets you dig deeper into run results by comparing TensorFlow scalars and system statistics. Guild integrates TensorBoard's charting component, which lets you compare series data by global step, relative time, and wall time. It also supports series smoothing and a logarithmic Y axis.
Capture more data
In addition to your TensorFlow event logs, which contain your training statistics and other summaries, Guild captures a range of other essential build artifacts, including script output, command flags, and system attributes and statistics such as GPU and CPU utilization. This information is indispensable for answering certain questions, particularly those concerning operational performance.
When you need to drill into even more detail, Guild seamlessly integrates TensorBoard into your project view. There's no need to start a separate TensorBoard process: Guild handles this in the background when you run the view command. TensorBoard lets you view event log summaries including scalars, images, audio, variable distributions and histograms, and an interactive view of the model graph.
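Assuming Guild AI is installed and you are working inside a project directory, opening the integrated view is a single command:

```shell
# Launch Guild's project view from the project directory.
# Guild starts TensorBoard in the background for you; there is no
# need to run a separate `tensorboard` process.
guild view
```

From the resulting web UI you can switch to the TensorBoard tab to inspect scalars, images, and histograms for any run.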
Simplified training workflow
Guild workflow consists of simple commands that you run for a project: prepare, train, evaluate, and serve. Guild fills in the details for each command using information from the Guild project file, letting you run complex operations without typing long, complex command arguments.
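As a sketch, a full training workflow looks like the following; the details of each step (scripts, flags, data locations) come from your Guild project file rather than from command-line arguments:

```shell
# Typical Guild workflow, run from the project directory.
guild prepare    # stage or download the training data
guild train      # train the model defined in the project file
guild evaluate   # evaluate the trained model against test data
guild serve      # serve the trained model over HTTP for testing
```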
Self documenting project structure
Guild projects provide instructions for performing the prepare, train, and evaluate operations. Projects are plain text files that are easy for humans to read. They are useful not only to Guild for running commands but also as model interface specifications.
Integrated inference server
Guild provides an integrated HTTP server that you can use to test your trained models before deploying them to TensorFlow Serving.
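A quick way to exercise the integrated server is with an ordinary HTTP client. The port, endpoint path, and JSON payload shape below are illustrative assumptions, not documented defaults; check your project's serve configuration for the actual values:

```shell
# Hypothetical example: send input data to the running inference
# server and read back the model's predictions. The port (8888),
# the /run path, and the payload shape are assumptions made for
# illustration only.
curl -X POST http://localhost:8888/run \
     -H "Content-Type: application/json" \
     -d '{"inputs": [[0.1, 0.2, 0.3]]}'
```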