Stable Diffusion - TextToImage TypeScript SDK Inferences
Stable Diffusion using the TypeScript SDK
For examples of the types of outputs to expect, you can visit the Stable Diffusion Demo at OctoAI.
This guide will cover the basics of running an inference to generate an image using the TypeScript SDK, including using our healthCheck method to ensure the endpoint is healthy before sending it any requests.
You can expand on this by using the SDK with other frameworks to display your generated images; however, that is outside the scope of this guide.
As a next step, Stable Diffusion is also an excellent template to use with Asynchronous Inference Using the TypeScript SDK.
Requirements
- Please follow How to create an OctoAI API token if you don't have one already.
- Please follow the TypeScript SDK Installation & Setup guide if you haven't already.
Stable Diffusion: TextToImage
Let’s use another QuickStart pre-accelerated template example. Stable Diffusion is a model that generates images from a text prompt. You can create an image using the code snippet below, which includes LoRAs and textual inversion to improve the quality of the results.
Please reference the QuickStart Templates on the TypeScript SDK for details on finding endpoint URLs for QuickStart and cloned templates, and the TypeScript SDK Reference for more information on specific methods.
import { Client } from "@octoai/client";
import * as fs from "fs";
// You can pass your OctoAI API token to the client manually
// const client = new Client(YOUR_TOKEN_HERE);
// Otherwise, the client will pick up OCTOAI_TOKEN if it is set as an environment variable
const client = new Client();
const sdEndpoint = "https://stable-diffusion-demo-kk0powt97tmb.octoai.run/text2img";
const sdHealthCheck = "https://stable-diffusion-demo-kk0powt97tmb.octoai.run/healthcheck";
const inputs = {
  // The prompt input is required, while the rest are optional
  "prompt": "A cute photo of a poodle puppy on a laptop.",
  // What we don't want to see
  "negative_prompt": "Blurry photo, distortion, low-res, bad quality",
  // Classifier-free guidance scale, 1 to 20
  "guidance_scale": 7.5,
  // Number of denoising steps, 1 to 500
  "num_inference_steps": 30,
  // Please view the QuickStart templates page for Stable Diffusion for more information
  // regarding LoRAs, text_inversion, and model.
  "model": "Realistic_Vision",
  "lora": { "creatures-design-v1": 0.5, "pokemon_v3_offset": 0.5 },
  "text_inversion": { "FastNegativeEmbedding": "FastNegativeV2" },
  "height": 512,
  "width": 512,
  // Algorithm used for denoising. Also accepts PNDM, KLMS, DDIM, etc.
  "scheduler": "DPMSolverMultistep",
};
async function writeSDOutputsToFile() {
  // Verify the endpoint is healthy (HTTP 200) before sending the inference request
  if ((await client.healthCheck(sdHealthCheck)) === 200) {
    const outputs: any = await client.infer(sdEndpoint, inputs);
    // The generated image is returned as a base64-encoded string; decode it and write it to disk
    const buffer = Buffer.from(outputs.completion.image_0, "base64");
    fs.writeFileSync("cute_poodle.png", buffer);
  }
}

writeSDOutputsToFile().then();
Once you’ve run this program, you'll have generated an AI image: the endpoint returns a base64 string, which the script decodes and writes to a file. You can open cute_poodle.png to see the results!

Stable Diffusion Outputs
In the example above, if you log the outputs, you'll receive an object like the one below. We recommend defining interfaces for your expected outputs to help guide the behavior of your application; a minimal sketch follows the example output.
{
  ckpt_load_time_ms: 0.00371,
  run_pipeline_time_ms: 1278.74618,
  completion: {
    image_0: 'base64 string representing the image generated'
  },
  prediction_time_ms: 1371.803945
}
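For instance, an interface for this response could look like the following sketch. It simply mirrors the fields in the logged output above; the exact shape (for example, whether additional image_N keys appear when more than one image is requested) is an assumption you should verify against your endpoint's actual responses.

// A minimal sketch of the text2img response shape shown above.
// Field names mirror the logged output; verify them against your endpoint before relying on them.
interface SDCompletion {
  // Each generated image arrives as a base64-encoded string
  image_0: string;
}

interface SDOutputs {
  ckpt_load_time_ms: number;
  run_pipeline_time_ms: number;
  prediction_time_ms: number;
  completion: SDCompletion;
}

// The inference result can then be cast instead of typed as `any`:
// const outputs = (await client.infer(sdEndpoint, inputs)) as SDOutputs;
// const buffer = Buffer.from(outputs.completion.image_0, "base64");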