In the previous post, I broke down the basic concepts of the Griptape AI framework, and now it’s time to put them into practice. We’ll try to use them to develop a small application that helps run a link-blog on Telegram.

The application will receive a URL, download its content, run it through an LLM to generate a summary, translate that summary into a couple of other languages, combine everything, and publish it to Telegram via a bot. The general flow can be seen in the diagram below:

  flowchart LR
    A["URL"]
    A --> Parser["Parser"] --> LLM1["Summarizer"] 
    LLM1 --> Translator1["Translate to Language 1"]
    LLM1 --> Translator2["Translate to Language 2"]
    Translator1 --> Combiner["Combiner"]
    Translator2 --> Combiner
    LLM1 --> Combiner
    Combiner --> Telegram["Telegram Bot"]

To keep things simple, I’ll omit the implementation of the Telegram bot and also set aside my favorite Human-in-the-loop, which, in my opinion, must be present at least somewhere around the combiner¹.

In the process, we’ll try to figure out when it’s best to use different structures, as well as how composable and flexible the resulting graphs are for modification.

Alright, let’s get started.

Creating the Project

As mentioned in the previous post, Griptape is a Python framework, so we’ll use uv to start our project:

$ uv init .
$ uv add "griptape[all]>=1.7.2" python-dotenv

The [all] extra installs all available drivers and loaders, which produces an environment of roughly 650 MB. In a real application, it makes sense to limit yourself to only the extras you actually use; the full list can be found in the project’s pyproject.toml. The extras are quite granular, so in most cases the actual footprint will be significantly smaller.

We’re also including python-dotenv because the drivers for various LLM providers use environment variables to manage API keys. In our example, we will use openrouter.ai², which provides an OpenAI-compatible API for a huge number of models and providers.

Accordingly, let’s create a .env file and put our key in it:

OPENROUTER_API_KEY=sk-or-v1-...

So, first, the skeleton of our application:

# main.py
import argparse
import dotenv
import os

dotenv.load_dotenv()

def process_url(url: str):
    """Process the provided URL."""
    key = os.environ.get('OPENROUTER_API_KEY', '')
    if not key:
        raise ValueError("OPENROUTER_API_KEY is not set in the environment variables.")
    print(f"Processing URL: {url}")

def main():
    parser = argparse.ArgumentParser(description='Process URLs for link blog')
    parser.add_argument('url', type=str, help='URL to process')
    args = parser.parse_args()
    process_url(args.url)

if __name__ == "__main__":
    main()

Let’s check it:

$ uv run ./main.py google.com
Processing URL: google.com

Excellent. We can now describe the graph. Reading the documentation answered one of the questions I raised in the last article:

Griptape provides three Structures: Agent, Pipeline, and Workflow.

Of the three, Workflow is generally the most versatile. Agent and Pipeline can be handy in certain scenarios but are less frequently needed if you’re comfortable just orchestrating Tasks directly.

Great, just as I suspected, the other primitives are better suited for very basic tasks, so we’ll just use Workflow everywhere and start building our graphs.

Loading Up on Websites

We’ll start by loading the website’s content. For this, Griptape provides the WebScraper driver with several different implementations, and the WebLoader loader. Our job is to wrap this in a Task:

from griptape.structures import Workflow
from griptape.tasks import CodeExecutionTask
from griptape.loaders import WebLoader
from griptape.artifacts import TextArtifact

def load_page(task: CodeExecutionTask) -> TextArtifact:
    """Load the content of the given URL."""
    print(f"Loading page: {task.input.value}")
    return WebLoader().load(task.input.value)


def print_result(task: CodeExecutionTask) -> None:
    """Print the result of the task."""
    for parent in task.parents:
        if parent.output.value:
            print(f"Output: {parent.output.value}")


def process_url(url: str):
    """Process the provided URL."""
    key = os.environ.get("OPENROUTER_API_KEY", "")
    if not key:
        raise ValueError("OPENROUTER_API_KEY is not set in the environment variables.")
    print(f"Processing URL: {url}")

    download_task = CodeExecutionTask(
        on_run=load_page, input=url, id="download_task", child_ids=["print_task"]
    )
    print_task = CodeExecutionTask(on_run=print_result, id="print_task")
    workflow = Workflow(tasks=[download_task, print_task])
    workflow.run()

What do we see here:

  1. A couple of CodeExecutionTasks. This is a special type of task that allows you to execute arbitrary code. Such a task takes a function as input, to which the task object itself is passed. That object exposes a large number of fields that can be accessed and processed. In this case, we have defined two tasks: one loads the site’s content, and the other prints the output of its parent tasks.
  Fields of the task
  2. To define the DAG structure itself, the child_ids or parent_ids parameters of the tasks are used. The Workflow itself simply accepts a list of these tasks.

  3. Workflow.run starts the tasks in the graph and returns the completed Workflow object, from which you can retrieve the completed tasks and their inputs/outputs. By the way, a Workflow can be run multiple times, and when using Conversation Memory, this operation is not idempotent.
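The child_ids mechanism is just an adjacency list over task ids. Here is a framework-free sketch (plain Python with the standard library, not Griptape code) of how such a list determines execution order:

```python
from graphlib import TopologicalSorter

# Each task id maps to its children, exactly like child_ids above.
tasks = {
    "download_task": ["print_task"],
    "print_task": [],
}

# TopologicalSorter expects predecessors, so invert the child links.
graph = {task: set() for task in tasks}
for task, children in tasks.items():
    for child in children:
        graph[child].add(task)

order = list(TopologicalSorter(graph).static_order())
print(order)  # ['download_task', 'print_task']
```

Conceptually, a Workflow run is a traversal of this kind, with independent branches executed concurrently.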

If we use the StructureVisualizer, we’ll see a picture like this:

  graph TD;
        Download_Task--> Print_Task;
        Print_Task;

Summarizing

For summarization, Griptape has many ready-made primitives, including TextSummaryTask. It takes a Summary Engine as input, through which you can configure summarization parameters, such as the model driver, prompt templates, and a certain Chunker, whose purpose we will discuss a little later. Let’s try to use this task:

from griptape.tasks import TextSummaryTask
from griptape.engines import PromptSummaryEngine
from griptape.drivers.prompt.openai import OpenAiChatPromptDriver
...

def process_url(url: str):
	...
    download_task = CodeExecutionTask(
        on_run=load_page, input=url, id="download_task", child_ids=["summary_task"]
    )

    prompt_driver = OpenAiChatPromptDriver(
        model="google/gemini-2.5-flash-preview-05-20",
        base_url="https://openrouter.ai/api/v1",
        api_key=key,
    )

    summary_task = TextSummaryTask(
        "Please summarize the following content in a concise manner. {{ parents_output_text }}",
        summary_engine=PromptSummaryEngine(
            prompt_driver=prompt_driver
        ),
        id="summary_task",
        child_ids=["print_task"],
    )

    print_task = CodeExecutionTask(on_run=print_result, id="print_task")

    workflow = Workflow(tasks=[download_task, summary_task, print_task])

    workflow.run()

This code gives us the following output:

$ uv run ./main.py https://docs.griptape.ai/stable/griptape-framework/structures/agents/
Output: Griptape Agents are a quick way to start using the platform. They take tools and input directly, which the agent uses to add a Prompt Task. The final output of the Agent can be accessed using the output attribute. An example demonstrates an Agent using a CalculatorTool to compute 13^7, successfully returning the result 62,748,517.

Here I came across an interesting quirk: for summary_task to receive the output of download_task as its input, you must pass it an instruction containing the {{ parents_output_text }} template. Although the summarizer already has the necessary prompt inside, it does not extract data from the output of its parent tasks on its own; we have to take care of that ourselves. Otherwise, everything seems quite straightforward.
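As far as I can tell, placeholders like this are Jinja-style templates. Conceptually, the substitution amounts to the following (a simplified sketch, not Griptape’s actual rendering code):

```python
import re

def render(template: str, context: dict) -> str:
    """Replace {{ name }} placeholders with values from the context."""
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(context.get(m.group(1), "")),
        template,
    )

prompt = render(
    "Please summarize the following content in a concise manner. {{ parents_output_text }}",
    {"parents_output_text": "Agents are a quick way to start."},
)
print(prompt)
```

The framework fills in variables such as parents_output_text from the task’s context before the prompt is sent to the model.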

  graph TD;
        Download_Task--> Summary_Task;
        Summary_Task--> Print_Task;
        Print_Task

Chunker

While summarization worked great on small pages, trying to run it on a 55,000-word page caused the task to hang, regardless of the model used. Explicitly specifying a Chunker solved the problem:

from griptape.chunkers import TextChunker
...

    summary_task = TextSummaryTask(
        "Please summarize the following content in a concise manner. {{ parents_output_text }}",
        summary_engine=PromptSummaryEngine(
            prompt_driver=prompt_driver,
            chunker=TextChunker(max_tokens=16000),
        ),
        id="summary_task",
        child_ids=["print_task"],
    )

Based on the logs, it seems the chunking in the summarizer is implemented iteratively, and in pseudocode, it can be expressed as follows:

chunks = [chunk1, chunk2, ..., chunkN]
summary = summarize(f"Summarize this: {chunk1}")
for chunk in chunks[1:]:
    summary = summarize(f"Update this summary {summary} with the following additional information: {chunk}")
return summary
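To make the pattern concrete, here is a runnable version with a stub in place of the model call (summarize below is a hypothetical placeholder, not a real API):

```python
calls = []

def summarize(prompt: str) -> str:
    """Stub standing in for an LLM call; records the prompt it was given."""
    calls.append(prompt)
    return f"<summary #{len(calls)}>"

def rolling_summary(chunks: list[str]) -> str:
    summary = summarize(f"Summarize this: {chunks[0]}")
    for chunk in chunks[1:]:
        summary = summarize(
            f"Update this summary {summary} with the following additional information: {chunk}"
        )
    return summary

final = rolling_summary(["chunk one", "chunk two", "chunk three"])
# N chunks cost N strictly sequential model calls; the final call sees the
# last chunk verbatim but only a compressed trace of the earlier ones.
print(len(calls), final)  # 3 <summary #3>
```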

This approach causes information located closer to the end of the text to outweigh the information at the beginning, which is not always acceptable. Personally, I would prefer to summarize chunks via Map-Reduce:

chunks = [chunk1, chunk2, ..., chunkN]
summaries = [summarize(f"Summarize this: {chunk}") for chunk in chunks]
separator = "\n---\n"
summary = summarize(f"Summarize these chunk summaries: {separator.join(summaries)}")
return summary

With this approach, all chunks are treated equally. It also allows for parallel summarization, although it requires one more request to the model.
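A runnable sketch of that Map-Reduce variant, again with a stub in place of the model call:

```python
from concurrent.futures import ThreadPoolExecutor

def summarize(prompt: str) -> str:
    """Stub standing in for an LLM call."""
    return prompt.removeprefix("Summarize this: ").upper()

def map_reduce_summary(chunks: list[str]) -> str:
    # Map: each chunk is summarized independently, so the calls can run in parallel.
    with ThreadPoolExecutor() as pool:
        summaries = list(pool.map(lambda c: summarize(f"Summarize this: {c}"), chunks))
    # Reduce: one extra call merges the per-chunk summaries.
    return summarize("Summarize this: " + "\n---\n".join(summaries))

print(map_reduce_summary(["alpha", "beta"]))
```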

However, this is where the flexibility of the framework and its engines shines. Thanks to them, this algorithm can be implemented in your own custom engine, inherited from BaseSummaryEngine, and used almost seamlessly.

Better Together

The next step is to translate this text into several languages. Of course, these tasks can be parallelized, and our Workflow provides excellent tools for this:

from griptape.tasks import PromptTask
...
def process_url(url: str):
    """Process the provided URL."""
    
    ...

    summary_task = TextSummaryTask(
        ...
        
        id="summary_task",
        child_ids=["print_task", "russian_translate_task", "polish_translate_task"],
    )

    russian_translate_task = PromptTask(
        "Please translate the following text to Russian: {{ parents_output_text }}",
        prompt_driver=prompt_driver,
        id="russian_translate_task",
        child_ids=["print_task"],
    )

    polish_translate_task = PromptTask(
        "Please translate the following text to Polish: {{ parents_output_text }}",
        prompt_driver=prompt_driver,
        id="polish_translate_task",
        child_ids=["print_task"],
    )

    print_task = CodeExecutionTask(on_run=print_result, id="print_task")

    workflow = Workflow(
        tasks=[
            download_task,
            summary_task,
            russian_translate_task,
            polish_translate_task,
            print_task,
        ]
    )

    workflow.run()

That was surprisingly simple. The PromptTask is the most basic primitive used for direct requests to an LLM. The only interesting thing here is that we specified several children for summary_task, which allows multiple tasks to run in parallel.

But how can we verify that the tasks are actually running in parallel? The developers have thought of this too, providing support for hooks in the API. Specifically, the constructor of any task has on_before_run and on_after_run parameters, allowing you to add arbitrary pre- and post-processing. Let’s use them:

from griptape.tasks import BaseTask
from datetime import datetime
...

def timestamp(task: BaseTask, action: str):
    print(f"task {task.id} {action} at {datetime.now().isoformat()}")

def process_url(url: str):
    ...
    polish_translate_task = PromptTask(
        "Please translate the following text to Polish: {{ parents_output_text }}",
        prompt_driver=prompt_driver,
        on_before_run=lambda task: timestamp(task, "started"),
        on_after_run=lambda task: timestamp(task, "finished"),
        id="polish_translate_task",
        child_ids=["print_task"],
    )    
    
    # And do the same for the other tasks

We get the following result, which fully meets our expectations:

$ uv run ./main.py https://docs.griptape.ai/stable/griptape-framework/structures/agents/
task summary_task started at 2025-06-05T22:03:02.272397
task summary_task finished at 2025-06-05T22:03:04.345067
task russian_translate_task started at 2025-06-05T22:03:04.347638
task polish_translate_task started at 2025-06-05T22:03:04.351531
task polish_translate_task finished at 2025-06-05T22:03:05.702819
task russian_translate_task finished at 2025-06-05T22:03:05.947364
Output: Agents in Griptape offer a quick start, directly processing tools and input to generate a Prompt Task. The final output is accessible via the `output` attribute. An example demonstrates an Agent using a `CalculatorTool` to compute 13^7, showing the input, tool action, and the resulting output.
Output: Вот перевод текста на русский язык:

Агенты в Griptape предлагают быстрый старт, напрямую обрабатывая инструменты и входные данные для генерации задачи (Prompt Task). Конечный результат доступен через атрибут `output`. Пример демонстрирует Агента, использующего `CalculatorTool` для вычисления 13^7, показывая входные данные, действие инструмента и полученный результат.
Output: Oto tłumaczenie tekstu na język polski:

Agenci w Griptape oferują szybki start, bezpośrednio przetwarzając narzędzia i dane wejściowe w celu wygenerowania Zadania Monitu (Prompt Task). Ostateczny wynik jest dostępny poprzez atrybut `output`. Przykład demonstruje Agenta używającego `CalculatorTool` do obliczenia 13^7, pokazując dane wejściowe, działanie narzędzia i wynik końcowy.

The logs clearly show that the Russian and Polish translations are running in parallel.

And for good measure, here’s the current graph:

  graph TD;
    Download_Task--> Summary_Task;
    Summary_Task--> Print_Task & Russian_Translate_Task & Polish_Translate_Task;
    Russian_Translate_Task--> Print_Task;
    Polish_Translate_Task--> Print_Task;
    Print_Task;

Finishing Up

The rest is essentially routine plumbing, so I’ll just provide the complete code for the program below:

import argparse
import dotenv
import os

from griptape.structures import Workflow
from griptape.tasks import CodeExecutionTask
from griptape.loaders import WebLoader
from griptape.utils import StructureVisualizer
from griptape.tasks import TextSummaryTask
from griptape.engines import PromptSummaryEngine
from griptape.drivers.prompt.openai import OpenAiChatPromptDriver
from griptape.tasks import PromptTask, BaseTask
from griptape.chunkers import TextChunker
from griptape.artifacts import TextArtifact
from datetime import datetime

import logging
logging.getLogger("griptape").setLevel(logging.WARNING)

dotenv.load_dotenv()


def load_page(task: CodeExecutionTask) -> TextArtifact:
    """Load the content of the given URL."""
    return WebLoader().load(task.input.value)


def combine_result(task: CodeExecutionTask) -> TextArtifact:
    """Combine results from parent tasks."""
    result = ""
    for parent in task.parents:
        if parent.output.value:
            result += f"{parent.output.value}\n\n"
    return TextArtifact(result)

def send_to_telegram(task: CodeExecutionTask) -> None:
    """Send the result to Telegram."""
    # Placeholder for sending to Telegram logic
    print(f"Sending to Telegram: {task.parents[0].output.value}")

def timestamp(task: BaseTask, action: str):
    print(f"task {task.id} {action} at {datetime.now().isoformat()}")


def process_url(url: str):
    """Process the provided URL."""
    key = os.environ.get("OPENROUTER_API_KEY", "")
    if not key:
        raise ValueError("OPENROUTER_API_KEY is not set in the environment variables.")

    prompt_driver = OpenAiChatPromptDriver(
        model="google/gemini-2.5-flash-preview-05-20",
        base_url="https://openrouter.ai/api/v1",
        api_key=key,
    )

    download_task = CodeExecutionTask(
        on_run=load_page, input=url, id="download_task", child_ids=["summary_task"]
    )

    summary_task = TextSummaryTask(
        "Please summarize the following content in a concise manner. {{ parents_output_text }}",
        summary_engine=PromptSummaryEngine(
            prompt_driver=prompt_driver,
            chunker=TextChunker(max_tokens=16000),
        ),
        on_before_run=lambda task: timestamp(task, "started"),
        on_after_run=lambda task: timestamp(task, "finished"),
        id="summary_task",
        child_ids=["combine_task", "russian_translate_task", "polish_translate_task"],
    )

    russian_translate_task = PromptTask(
        "Please translate the following text to Russian: {{ parents_output_text }}",
        prompt_driver=prompt_driver,
        on_before_run=lambda task: timestamp(task, "started"),
        on_after_run=lambda task: timestamp(task, "finished"),
        id="russian_translate_task",
        child_ids=["combine_task"],
    )

    polish_translate_task = PromptTask(
        "Please translate the following text to Polish: {{ parents_output_text }}",
        prompt_driver=prompt_driver,
        on_before_run=lambda task: timestamp(task, "started"),
        on_after_run=lambda task: timestamp(task, "finished"),
        id="polish_translate_task",
        child_ids=["combine_task"],
    )

    combine_task = CodeExecutionTask(on_run=combine_result, id="combine_task", child_ids=["send_task"])
    send_task = CodeExecutionTask(
        on_run=send_to_telegram, id="send_task",
    )


    workflow = Workflow(
        tasks=[
            download_task,
            summary_task,
            russian_translate_task,
            polish_translate_task,
            combine_task,
            send_task
        ]
    )

    workflow.run()

    print(StructureVisualizer(workflow).to_url())
    print("Workflow completed successfully.")


def main():
    parser = argparse.ArgumentParser(description="Process URLs for link blog")
    parser.add_argument("url", type=str, help="URL to process")
    args = parser.parse_args()
    process_url(args.url)


if __name__ == "__main__":
    main()

And the graph:

  graph TD;
        Download_Task--> Summary_Task;
        Summary_Task--> Combine_Task & Russian_Translate_Task & Polish_Translate_Task;
        Russian_Translate_Task--> Combine_Task;
        Polish_Translate_Task--> Combine_Task;
        Combine_Task--> Send_Task;
        Send_Task;

I find the code to be quite simple, easy to read, and flexible enough for modification and reuse. Obviously, in real-world applications, all this simplicity will be diluted with error handling, logging, and so on. But that’s a topic for a separate discussion.

Additionally, the Workflow API provides two more styles for task composition that don’t require specifying parents and children during task creation:

  • Imperative, where we can use add_parent and add_child functions.
  • The so-called bit-shift, where parents and children can be linked like this: task1 >> task2 >> [task3, task4]

Both of these styles make it easy to assemble different graphs from ready-made primitives, although bit-shift feels more like a separate DSL than standard Python.
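The bit-shift style presumably works by overloading the right-shift operator on tasks. A minimal mimic of such a DSL in plain Python (a sketch of the idea, not Griptape’s implementation) might look like this:

```python
class Task:
    def __init__(self, id: str):
        self.id = id
        self.children: list["Task"] = []

    def __rshift__(self, other):
        # task >> other links children; a list on the right-hand side fans out.
        targets = other if isinstance(other, list) else [other]
        self.children.extend(targets)
        return other  # returning the right side lets chains like a >> b >> c work

t1, t2, t3, t4 = (Task(name) for name in "abcd")
t1 >> t2 >> [t3, t4]
print([c.id for c in t2.children])  # ['c', 'd']
```

Because `>>` is left-associative, `t1 >> t2` links t1 to t2 and returns t2, which then fans out to the list.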

We’re Done

In conclusion, I’d like to say that I’m enjoying the framework at this stage. It is quite logical, flexible, and pleasant to use, although it is not without some rough edges.

I also found the documentation to be quite good, although it lacks a description of exactly how some of the engines work. For now, to get this information, you have to dig through the logs or the code.

In this post we’ve figured out the basic functionality. Next time, we’ll look at what primitives the framework provides for building RAGs.


  1. We want to make sure we’re not painfully embarrassed by what we’ve published, right? ↩︎

  2. Which I highly respect and hope to write a separate post about someday. ↩︎