Introducing Fine-Tuning on NightCafe

Fine-tuning offers a more personalized AI experience: it lets you train a model to understand and recreate specific styles, faces, or objects.


How to Get Started with Fine-Tuning

  1. Access My Models: 

    PC: Find it in the main menu header.

    Mobile: Open the dropdown menu by tapping on your profile picture (top right corner) and select “My Models”.

  2. Click on Fine-tune a new model

  3. Choose your Model Type: Options include face, object or animal, and style. Name your model so you can find it later.


    Note: Fine-tuning is a PRO-only feature, but free users get 1 free face-model tune and 10 generations with it.

  4. Choose images for model training:

    Click on Choose or Create Dataset. Here you can upload new images or choose images from your existing library to use for training your model.

    Start with at least 20 images; the more diverse and numerous they are, the more effective the training. Once your dataset is set, scroll down, agree to the terms, and click on "start training".

    Note: A dataset is a set of images that are used to train a model.

  5. Wait for Training Completion:
    This typically takes 10 to 30 minutes. You will get a notification once it’s done. Access your trained model either from the "Model" picker in NightCafe Studio or the "My Models" page.

  6. Use Your Model:
    Include your model's "token" in your prompt (guidance provided below the prompt field). Then, unleash your creativity!

Things to Note

  1. Non-PRO users get 10 free PRO generations after training a model. To continue using the model, consider upgrading to NightCafe PRO.
  2. Our Fine-tuning feature is still in BETA. This means there might be hiccups, and we can't guarantee perfect results every time.
  3. Fine-tuning a model doesn't automatically produce images. Once your model is ready, you can use it as you please to make your creations.
  4. Fine-tuning restrictions apply to residents of the state of Illinois, USA.

Privacy Matters

Your models are private and safe. You are the only user who can access them.

To contact NightCafe Staff directly, reach out via the feedback/support form.


Using the fine-tuned model in the prompt

What is LoRA?

"lora" or "LoRA" is the type of fine-tuned model that you can train on NightCafe. It is an acronym for "Low-Rank Adaptation", a method for quickly fine-tuning models on a small dataset. Simply put, the LoRA training method makes it easier to train Stable Diffusion on different concepts, such as characters or a specific style.


What is a token and how do I use it?

When you use a finetuned model, you need to add the token for that model to your prompt so that it can be interpreted in the context of the rest of the prompt.

The token is in the format <{type}:{name}:{optional weight}>. An example is <lora:My Face:0.8>. This allows you to write a prompt like "A photo of <lora:My Face:0.8> riding an elephant", or "A unicorn in the style of <lora:Dark Fantasy:0.5>".


The weight is optional and can be omitted. Weights should usually be between 0 and 1; if omitted, the weight defaults to 0.8. E.g. <lora:My Face> will be interpreted as <lora:My Face:0.8>. In the future there may be more types of models, which is why the model type is included as part of the token.
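As an illustration of the token format described above, here is a minimal Python sketch that extracts tokens from a prompt and applies the default weight of 0.8. This is not NightCafe's actual parser; the regex, function name, and return shape are only assumptions for demonstration purposes.

```python
import re

# Matches the documented token format <{type}:{name}:{optional weight}>,
# e.g. <lora:My Face:0.8> or <lora:Dark Fantasy> (weight omitted).
TOKEN_RE = re.compile(r"<(?P<type>\w+):(?P<name>[^:<>]+)(?::(?P<weight>[\d.]+))?>")

def parse_model_tokens(prompt: str):
    """Return a list of (type, name, weight) triples found in a prompt.

    A missing weight defaults to 0.8, mirroring the behavior described above.
    """
    return [
        (m.group("type"),
         m.group("name"),
         float(m.group("weight")) if m.group("weight") else 0.8)
        for m in TOKEN_RE.finditer(prompt)
    ]

print(parse_model_tokens("A photo of <lora:My Face:0.8> riding an elephant"))
# -> [('lora', 'My Face', 0.8)]
print(parse_model_tokens("A unicorn in the style of <lora:Dark Fantasy>"))
# -> [('lora', 'Dark Fantasy', 0.8)]  (omitted weight defaults to 0.8)
```

The same idea applies to prompts containing several tokens: each one is interpreted independently, in the context of the rest of the prompt.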


