
· 4 min read
DahnM20

AI-Flow is a tool designed to simplify and automate your AI workflows by connecting various services and tools into a unified flow. This guide will help you get started with AI-Flow, including adding nodes, connecting them, and customizing your workspace for an optimized workflow.

Adding and Connecting Nodes

To build your AI workflow, nodes can be added to the canvas using a simple drag-and-drop interface. Here's a quick overview of how to manage nodes:

  • Handles: In AI-Flow, input and output connections are visualized through handles:
    • Round handles represent input connections.
    • Square handles represent output connections.
  • Handle Color Coding:
    • Blue inputs are optional.
    • Red inputs are mandatory and must be connected (or filled) for the node to function.

For some nodes, values can either be entered directly into the field or provided through a handle. If a handle is connected to a field, the input field disappears, leaving only the handle.


Example Node connection

Here’s a basic example:

  • Both methods yield the same result.
  • The context field is optional, allowing the node to function without it.
  • The prompt field is mandatory and must be either filled in or connected to another node.

Types of Nodes

AI-Flow offers a wide variety of nodes to suit different needs. Below is a general overview of the node categories:

  • Inputs: Nodes that bring external data into your flow.
  • Models: These nodes connect to AI models provided by services such as OpenAI, StabilityAI, and Replicate.
  • Tools: Nodes designed to manipulate data and structure your workflow.
  • API Builder: These nodes enable your flow to be accessed via API calls. Learn more about this feature in the API Builder documentation.

To dive deeper into the functionality of a specific node, use the help action within the node for detailed descriptions, demos, and related resources.

Help Action

File Upload Node

The File Upload node is used to upload a file into the workflow. The node returns a URL that links to the uploaded file.

It's important to note that if you upload a PDF file, the output of the File Upload node will not contain the text content of the PDF itself. To extract the text from the document, follow the upload with a Document-to-Text node, which will process the file and return its text content.

File Upload Node

Opening the Right-Side Pane


The right-side pane in AI-Flow provides additional functionality to enhance your workflow management. Here’s what you can do when the pane is open:

  • View Outputs: See a comprehensive list of all outputs generated by the nodes in your flow.
  • Edit Nodes: Directly edit any selected node, even if the node is minimized on the canvas.
  • Disable Auto-Save: Choose to disable the automatic cloud save feature if preferred.
  • Save and Import Flows: You can save your current flow as a .json file for future use or import a previously exported flow.
  • API Management: Manage your API settings and configurations directly from this pane.

This feature is essential for keeping your workflow organized and accessible while providing quick access to critical actions.

Customizing Your Experience

You can tailor the AI-Flow interface to fit your needs:

  • Access the settings to customize which nodes are displayed in the app.
  • The minimap can be toggled on or off to suit your preference.

Note that new nodes may be added over time but may not appear by default. Stay updated with news on the Home page and adjust your display settings to include any newly added nodes that fit your workflow.


· 2 min read
DahnM20

AI-Flow empowers users to automate complex AI workflows by connecting various tools, models, and data sources. Through the Replicate Node in AI-Flow, you can easily access, select, and utilize models from Replicate to enhance your AI workflows.

Replicate Node Overview

The Replicate Node in AI-Flow serves as a gateway to a multitude of open-source AI models available on the Replicate platform. Replicate allows community members to host and run models in the cloud, and AI-Flow makes it simple to integrate these models into your workflows.

With the Replicate Node, you gain access to a wide variety of models, including text generators, image creators, video processors, and more.

Example Node connection

Spotlight Models and Categories

AI-Flow’s Replicate Node features a curated selection of the most popular models to help users get started efficiently. These "spotlight" models are displayed in the interface for easy access. However, the complete Replicate catalog offers a vast array of additional models that cannot be fully represented within the interface. If you require a specific model not listed, you can easily search for it on the Replicate website and integrate it into AI-Flow by entering the model's ID.

Model Popup

The categorized interface allows for quick navigation, whether you're seeking models for text generation, image creation, or other specialized tasks. However, not all models are fully compatible with AI-Flow due to the diversity in functionality and support across community-hosted models. Despite this, the Replicate Node is designed to make the integration process as seamless as possible, ensuring that you can leverage a wide range of models efficiently within your workflow.

· 4 min read
DahnM20

Unleashing the Power of AI Workflow with API Builder Nodes

Streamlining and integrating AI workflows is now more accessible with the advanced capabilities of the AI-Flow API. By leveraging the API Builder, developers can create robust AI flows, ensuring seamless integration and interaction between various AI models like GPT, DALL-E, Claude, Stable Diffusion, or any Replicate model. This article delves into the core features of the AI-Flow API Builder, demonstrating its benefits and ease of use.

API Builder Overview

Streamline Your AI Flow with API Input and Output Nodes

API Input Node: The API Input Node is designed to define the inputs for your API, mapping each field in the request body to a corresponding node in your flow. By setting default values, developers can make certain parameters optional.

API Input Node Example

Example Configuration:

{
  "my_prompt": "Lorem Ipsum",
  "my_context": "Lorem Ipsum"
}

This configuration showcases how inputs are structured, making it straightforward to initiate the flow with clear, defined parameters.
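To make the role of default values concrete, here is a minimal plain-Python sketch of how an API Input node could resolve parameters: fields in the request body override the configured defaults, and any field left without either is rejected. This is illustrative only; the function name and logic are assumptions, not AI-Flow's actual implementation.

```python
def resolve_inputs(defaults, body):
    # Request-body values take precedence over configured defaults.
    merged = {**defaults, **body}
    # A field with no default and no supplied value stays None -> required.
    missing = [k for k, v in merged.items() if v is None]
    if missing:
        raise ValueError(f"Missing required inputs: {missing}")
    return merged

# "my_context" has a default, so callers may omit it;
# "my_prompt" has no default (None), so it must be supplied.
defaults = {"my_prompt": None, "my_context": "Lorem Ipsum"}

inputs = resolve_inputs(defaults, {"my_prompt": "Hello"})
```

Under this scheme, omitting `my_context` in the request body simply falls back to its default, which is exactly what makes the parameter optional.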


API Output Node: Configuring the API Output Node is straightforward. This node specifies the names of the fields in the final response, ensuring the output is structured and understandable. Multiple output nodes can be set to pass additional or intermediate results.

API Output Node Example

In this simple example, the API response will be formatted as follows:

{
  "my_output": "Lorem Ipsum dolor sit amet, consectetur"
}

This example demonstrates the simplicity of output configuration, providing a clear and concise response structure.

Manage and Monitor Your API with the API Builder View

The API Builder View is your command center for managing and monitoring your AI Workflow API. Accessible through the right pane of the app, this view provides a comprehensive overview of your API configuration, allowing you to generate and manage API Keys seamlessly.

API Builder View

Generating API Keys: To ensure secure access, API Keys are generated within the API Builder. These keys, essential for authorizing requests, are displayed only once to maintain security. Including these keys in your requests as an Authorization header is crucial for successful API calls.

Running Your Flow through the API: Launching your flow is straightforward with the provided code snippets in the API Builder View. For instance, using cURL, you can initiate your flow as follows:

curl https://api.ai-flow.com/v1/flow/<your_flow_id>/run \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $AI_FLOW_API_KEY" \
  -d '{
    "my_prompt": "Lorem Ipsum",
    "my_context": "Continue my sentence with 5 words of lorem ipsum"
  }'

This command initiates the flow, returning a run ID to track the process. Retrieve the results using this ID once the processing completes.
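The same call can be made from Python with the standard library. The sketch below builds the request shown in the cURL snippet; the commented-out launch line and the shape of the run-ID response are assumptions, so check the code snippets in the API Builder View for the exact URLs and response fields.

```python
import json
import os
import urllib.request

API_BASE = "https://api.ai-flow.com/v1"

def build_run_request(flow_id, inputs, api_key):
    # Mirrors the cURL call: JSON body plus Bearer-token Authorization header.
    return urllib.request.Request(
        f"{API_BASE}/flow/{flow_id}/run",
        data=json.dumps(inputs).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_run_request(
    "<your_flow_id>",
    {"my_prompt": "Lorem Ipsum",
     "my_context": "Continue my sentence with 5 words of lorem ipsum"},
    os.environ.get("AI_FLOW_API_KEY", "test-key"),
)

# To actually launch the flow (returns a run ID you then poll for results):
# response = json.load(urllib.request.urlopen(req))
```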

Enhance Integration with Webhook Nodes

The Webhook Node is a versatile tool within the API Builder, enabling you to send outputs to designated URLs. Configuring the Webhook Node involves specifying the target URL and selecting the outputs to send, with the option to include custom signatures for enhanced security.

Webhook Node Example

In this case, the webhook will send the following data:

{
  "my_output": "Lorem Ipsum dolor sit amet, consectetur"
}

In this configuration, the Webhook Node sends structured data to the specified URL, ensuring smooth integration and authentication via custom signatures.
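One common way such a custom signature works is HMAC signing: the sender signs the raw payload with a shared secret, and the receiver recomputes the signature to authenticate the request. The sketch below illustrates that pattern; AI-Flow's exact signing scheme (algorithm, header name, encoding) is not documented here, so treat this as an assumption to verify against the Webhook Node's settings.

```python
import hashlib
import hmac

def sign(secret: bytes, payload: bytes) -> str:
    # HMAC-SHA256 over the raw request body, hex-encoded.
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify(secret: bytes, payload: bytes, signature: str) -> bool:
    # Constant-time comparison to avoid timing attacks.
    return hmac.compare_digest(sign(secret, payload), signature)

payload = b'{"my_output": "Lorem Ipsum dolor sit amet, consectetur"}'
secret = b"shared-webhook-secret"  # hypothetical value for illustration

signature = sign(secret, payload)
```

On the receiving end, `verify` rejects any payload that was tampered with in transit or signed with the wrong secret.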

Conclusion

The AI Workflow API, powered by the API Builder Nodes, offers a streamlined, efficient way to create and manage AI flows. With intuitive nodes for input and output, API management tools, and flexible webhook configurations, developers can build powerful AI workflows tailored to their needs.


· 4 min read
DahnM20

Generate Consistent Characters Using AI: A Comprehensive Guide

Are you looking to create consistent and cohesive characters in your AI-generated images? This guide will walk you through practical methods to achieve uniformity in your AI character generation, part of our broader challenge on How to Automate Story Creation.

The Challenge of Consistent AI Image Generation

AI-powered image generation is a powerful tool, but it often introduces a level of randomness. This means you might need to generate images multiple times to get a convincing result. This guide doesn't present state-of-the-art techniques but rather shares my own experiments to help you achieve more consistent character images.

While the methods discussed are not foolproof, they represent a series of experiments that can guide you in developing your own approach to consistent AI character generation.

Method 1: Precise Prompt Descriptions

One of the keys to successful image generation is crafting high-quality prompts. If your descriptions are precise and consistent, you’re more likely to achieve similar results across multiple images.

Given our challenges with precision, we’ll use AI to assist in generating detailed descriptions. For example, I started with an image previously generated and asked ChatGPT to describe it accurately. This description was then used as a prompt for Stable Diffusion 3.

First Generation

Despite some similarities, the AI missed certain details, such as the character's age. By updating the prompt to specify that the character is 16 years old, we achieve better consistency.

Second Generation

In this iteration, the AI misinterpreted the hair color due to lighting effects in the original image. Using StabilityAI’s Search and Replace feature, I swapped red hair for brown hair and refined the description.

Third Generation

Here's a quick fix for the character's pet, again using the Search and Replace feature.

Fourth Generation

With the revised prompt, including specific details about hair color and other features, the new iterations are more consistent from the start.

Method 2: Creating a Consistent Face Template

Once you have a consistent character concept, ensuring the face remains consistent across different angles and expressions can be challenging. To address this, create a clear face template that can be used to correct other images.

Using the same method, generate a close-up portrait of the character:

Portrait Generation

Next, use models like fofr/consistent-character with the Replicate Node to generate various face angles. This model helps maintain consistency in facial features across different poses.

Face Angle Generation

Although we lost some of the digital-painting fantasy vibe, the model ensures facial consistency, which can be invaluable for face-swapping in illustrations. We may find a way to reintroduce that style later.

Conclusion and Next Steps

This guide provides a starting point for achieving consistency in AI-generated characters. By refining prompts and creating consistent face templates, you can produce more cohesive and believable character images.

Stay tuned for Part 2, where we’ll explore additional methods to refine and complete your character generation process.

Start experimenting with these methods today using AI-FLOW.


By incorporating these strategies, you’ll be on your way to mastering consistent character generation in AI. For more in-depth techniques and examples, be sure to follow our blog and check out the next part of this series.

· 4 min read
DahnM20

How to Automate Story Creation Using AI-FLOW - Part 2

This is the second installment of our challenge on How to Automate Story Creation.

In this part, we will focus on building a chapter and automating illustration generation.

Writing the First Chapter

In the previous part, we created a plan of the story with three chapters and a short summary for each. We could split the plan into three chunks, but for simplicity, I'll keep the chapters as a single block. This approach helps GPT maintain the story's context, ensuring continuity between chapters without introducing conflicting elements.

When writing your chapter, it is important to remind GPT of the desired tone, the target audience, and how you want the story to be told. You might prefer more dialogue or perhaps more descriptions. This choice is up to you.

I’ve used a basic prompt that emphasizes important elements, but please note that this is just a simple example.

Here’s the prompt I used for the first chapter:

Write the first chapter of this short story intended for a 12-year-old audience.

  • Tone: Maintain a light-hearted, engaging, and adventurous tone. The story should be exciting and filled with wonder, suitable for young readers.
  • Language: Use simple and clear language. Avoid complex vocabulary and ensure that sentences are easy to follow, yet vivid enough to spark imagination.
  • Dialogue: Craft natural and relatable dialogue for pre-teens. Ensure conversations are lively and reflect the age and personality of the characters.
  • Pacing: Keep the chapter fast-paced and captivating to hold the reader's attention. Introduce key elements of the story quickly to hook the audience from the beginning.
  • Descriptions: Use vibrant and imaginative descriptions to paint a clear picture of the scenes and characters. Aim for language that is evocative but not overly detailed or intricate.
  • Length: Keep the chapter concise, focusing on introducing the main elements of the story without overloading the reader with too much information.

Extracting Interesting Scenes

From the chapter, we will identify the most interesting scenes to illustrate:

Based on this chapter, identify 3 interesting elements that would be compelling to illustrate. Provide each element as a short phrase, separated by semicolons. Do not add any additional comments.

Output:

Eryn and Frostbite navigating the icy forest; The scarlet dragon scale above the fireplace; The Crystal Caves glimmering in the distance.

Next, use the Data Splitter to treat each element individually.

Split the concepts
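In plain Python, the Data Splitter step amounts to splitting the semicolon-separated output into individual scene descriptions, so each can feed its own visual prompt. The function name below is just for illustration, not AI-Flow's internal code.

```python
def split_scenes(text: str) -> list[str]:
    # Split on semicolons and trim whitespace; drop empty fragments.
    return [part.strip() for part in text.split(";") if part.strip()]

scenes = split_scenes(
    "Eryn and Frostbite navigating the icy forest; "
    "The scarlet dragon scale above the fireplace; "
    "The Crystal Caves glimmering in the distance."
)
```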

Creating Visual Prompts

Once the concepts are split, use the Merge Node to create an illustrated prompt based on the specific scene and the overall essence of the story. If your essence is good enough, it should include character descriptions, important places, concepts, and the desired art style. This helps to get consistent visual prompts.

Here we are using the "Merge + GPT" mode, so the merge result is sent directly as a prompt to GPT.

Example Prompt:

Based on this story description: ${input-2}

Create a visual prompt for DALL-E emphasizing this element for a given scene: ${input-1}

IMPORTANT: Respond with only the visual prompt. Do not add any other text, title, comments, or explanations.


Ensure GPT understands to focus on the current element to avoid depicting the entire story/chapter.
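The Merge node's substitution of `${input-1}` and `${input-2}` can be sketched in a few lines of Python. Note that `string.Template` cannot be used directly because the placeholder names contain hyphens; the implementation below is an assumption for illustration, not AI-Flow's actual code.

```python
def merge(template: str, inputs: dict[str, str]) -> str:
    # Replace each ${name} placeholder with its corresponding input value.
    for name, value in inputs.items():
        template = template.replace("${" + name + "}", value)
    return template

prompt = merge(
    "Based on this story description: ${input-2}\n\n"
    "Create a visual prompt for DALL-E emphasizing this element "
    "for a given scene: ${input-1}",
    {"input-1": "The Crystal Caves glimmering in the distance",
     "input-2": "A frozen country where young Eryn seeks a dragon scale"},
)
```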

Repeat this process for each scene. You can duplicate your node.

Illustrate story element

Here are my results for "The Crystal Caves" and "The scarlet dragon scale above the fireplace". Note that GPT added the main characters in the first one, based on the essence.

Advanced Tips

Consider adding a negative prompt in tools like Stable Diffusion 3 to refine the results. For example, adding "realistic" as a negative prompt steers the generation away from realism if that's not desired.

When merging, make sure GPT prioritizes the current element over the entire story to maintain focus.

Conclusion

Creating a story is a complex project. Even with perfect prompts, proceed step by step to ensure smooth progress. This guide provides a logical flow for using AI-FLOW to aid in your story creation. In the next part, we will explore ways to create consistent visuals for our characters.

Start your journey with AI-FLOW now!

Overall flow

Stay tuned for the next part where we delve into character visual consistency.

· 4 min read
DahnM20

How to Automate Story Generation Using AI-FLOW - Part 1

This guide aims to provide insights into automating the generation of a full short story using AI. The objective is to generate a coherent and compelling story, complete with engaging visuals. The ultimate goal is to achieve this in one click after setting up the initial workflow.

To clarify, this guide is not intended to promote mass production of AI-powered stories but rather to offer a method to help visualize and inspire you during your creative process.

Initializing the Story

Begin with a basic concept of the story you want to create:

  • Who is the main character?
  • Does the main character have a sidekick, pet, or companion?
  • Where does the story take place?
  • What are the key concepts or events in your story?
  • What is the art style?
  • Who is the target audience?

Adding your personal touch to the story is crucial. You can choose to generate these ideas with AI, but if your prompt is too simple, the result may be a generic story.

I will keep it simple for the example, but you may need something more elaborate. Here's my prompt:


The story unfolds in a frozen country where our young hero, Eryn, a 16-year-old girl, is inspired by her late father's heroism. Eryn wields his cherished sword, dreaming of living up to his legacy. Her mission is crucial: to find a scarlet dragon’s scale that has sustained her family with warmth for the past two years. As Eryn embarks on her quest, she discovers a profound truth—that true heroism lies not in the legendary sword but in the bravery and heart of the one who wields it.

Art Style: The narrative is illustrated in a digital painting style, blending poetic elements suitable for children, creating a whimsical and inspiring journey.


Elaborating the Universe with AI

Using your inputs, ask the AI to connect all the elements and develop the universe and story into a simple summary. The goal is to capture the "essence" of the story.

Here’s a sample prompt you can use:

Based on these ideas, detail the story, characters, important locations, and the main quest.

Building the essence of the story

Structuring Your Story

Using the essence of your story, ask the AI to create a simple plan. For a short story, you might request three chapters. Each chapter should have a title and a brief summary.

Here's an example prompt:

Based on this description, create a plan for the book with three chapters. Provide a short summary for each chapter, ensuring that the story concludes at the end of Chapter 3.

Cover image flow

The first node here is just a Display Node used to show the essence of the story.

Creating the Cover for Your Story

Using the essence, create a visual prompt for the story's cover. Ask GPT to refine the essence into a visual prompt that considers the chosen art style. Then, use tools like Stable Diffusion 3 or DALL-E to generate the image. If the result isn't satisfactory, re-run the image generation. If necessary, regenerate the prompt and try again.

Here’s a sample prompt for DALL-E:

Based on this story, create a visual prompt for DALL-E representing an ideal cover for this story.

Cover image flow

Here is the resulting cover!

For this example, I used both DALL-E 3 and Stable Diffusion 3 to compare. DALL-E produced a cover with a strong art style and a solid title reminiscent of children’s stories. Stable Diffusion 3 created a more realistic, teenager-appropriate illustration. The outcome depends on how you instruct GPT to build your prompt. In a real scenario, you’ll need to tweak your prompt and regenerate the image multiple times to achieve convincing results.

N.B.: DALL-E 3 improves each of your prompts in the background.

In the next article, we will explore how to build a chapter and create associated images!

You can try AI-FLOW now!

· 2 min read
DahnM20

Introducing Enhanced StabilityAI Integration in AI-FLOW

With the integration of StabilityAI's API into AI-FLOW, we've broadened our suite of features far beyond Stable Diffusion 3. This integration allows us to offer a versatile range of image processing capabilities, from background removal to creative upscaling, alongside search-and-replace functionalities.

Given the expansive set of tools and the ongoing advancements from StabilityAI, we've adopted a more flexible integration approach, akin to our implementation with the Replicate API. Our goal is to support automation and rapid adoption of new features released by StabilityAI.

StabilityAI feature showcase

Here's a rundown of the features now accessible through AI-FLOW, as per the StabilityAI documentation:

  • Control - Sketch: Guide image generation with sketches or line art.
  • Control - Structure: Precisely guide generation using an input image.
  • Edit - Outpaint: Expand an image in any direction by inserting additional content.
  • Edit - Remove Background: Focus on the foreground by removing the background.
  • Edit - Search and Replace: Automatically locate and replace objects in an image using simple text prompts.
  • Generate - Core: Create high-quality images quickly with advanced workflows.
  • Generate - SD3: Use the most robust version of Stable Diffusion 3 for your image generation needs.
  • Image to Video: Employ the state-of-the-art Stable Video Diffusion model to generate short videos.
  • Upscale - Creative: Elevate any low-resolution image to a 4K masterpiece with guided prompts.

These enhanced capabilities are great assets for your image processing workflow. Explore these features and find innovative ways to enhance your projects! Try it now!

· 2 min read
DahnM20

This article only concerns the hosted version of the application.

Inspired by valuable user feedback, our new Flexible Pricing strategy is now live, offering a more streamlined and cost-effective approach to using the platform. This update allows users to specify their API keys when logged in, providing greater flexibility and control over their usage and expenses.

Understanding the Hybrid System

Here’s how the new system works:

  • Credits by Default: Upon logging into the app, you can access everything by default if you have platform credits.
  • Add Your API Keys: You can add your API keys and use the related nodes for free, with a minor fee for resource usage.
  • Avoid Duplicated Services: This system prevents users from paying for services they already have, ensuring they only pay for the platform's resources.

While the hosted version remains practical for some, this system ensures the platform's long-term viability and provides a fair approach for all users.

Transition to the New System

Our goal is to make this flexible pricing system the standard. Starting June 24, it will replace the current "Use my own keys" option, simplifying the process for all users. We encourage you to explore this new feature and share your feedback with us.

Conclusion

The introduction of flexible pricing options marks a significant step forward in enhancing user experience on our platform. By integrating your API keys and benefiting from a resource-based fee structure, you can enjoy a more personalized and cost-effective service.

We are eager to hear your thoughts on this new strategy. Please feel free to share your feedback as we continue to improve and evolve our platform to better meet your needs.

· 2 min read
DahnM20

Introducing Claude 3 from Anthropic in AI-FLOW v0.7.0

Following user feedback, AI-FLOW has now integrated Claude from Anthropic, an upgrade in our text generation toolkit.

Example using Claude

Get Started

The Claude node is quite similar to the GPT one. You can add a textual prompt and additional context for Claude.

Example using Claude

The only difference is that the Claude node is a bit more customizable. You'll have access to:

  • temperature: Use a temperature closer to 0.0 for analytical/multiple-choice tasks and closer to 1.0 for creative and generative tasks.
  • max_tokens: The maximum number of tokens to generate before stopping.

Here are the three model descriptions, according to the Anthropic documentation:

  • Claude 3 Opus: Most powerful model, delivering state-of-the-art performance on highly complex tasks and demonstrating fluency and human-like understanding
  • Claude 3 Sonnet: Most balanced model between intelligence and speed, a great choice for enterprise workloads and scaled AI deployments
  • Claude 3 Haiku: Fastest and most compact model, designed for near-instant responsiveness and seamless AI experiences that mimic human interactions

Dive into a world of enhanced text creation with Claude from Anthropic on AI-FLOW. Experience the power of advanced AI-driven text generation. Try it now!

· 2 min read
DahnM20

Introducing Stable Diffusion 3 in AI-FLOW v0.6.4

AI-FLOW has now integrated Stable Diffusion 3, a significant upgrade in our image generation toolkit. This new version offers enhanced capabilities and adheres more closely to the prompts you input, creating images that truly reflect your creative intent. Additionally, it introduces the ability to better incorporate text directly within the generated images.

Visual Comparison: From Old to New

To illustrate the advancements, compare the outputs of the previous Stable Diffusion node and the new Stable Diffusion 3 node using the prompt:

The phrase 'Stable Diffusion' sculpted as a block of ice, floating in a serene body of water.

The difference in detail and fidelity is striking.

Example

Model Options: Standard and Turbo

Choose between the standard Stable Diffusion 3 and the Turbo version. Note that with the Turbo variant, the negative_prompt field is not utilized, which accelerates processing while maintaining high-quality image generation.

Enhance Your Creative Process

Experiment by combining outputs from Stable Diffusion 3 with other APIs, such as InstantMesh on Replicate, which generates a 3D mesh from any given image input. This integration opens new possibilities for creators and developers.

Example

Looking Ahead

Expect more enhancements and support from StabilityAI in the coming weeks as we continue to improve AI-FLOW and expand its capabilities.

Get Started

Dive into a world of enhanced image creation with Stable Diffusion 3 on AI-FLOW. Experience the power of advanced AI-driven image generation. Try it now!