%pip install tablegpt-agent
This package depends on pybox to manage the code execution environment. By default, pybox operates in in-cluster mode. If you intend to run tablegpt-agent in a local environment, install the optional dependency as follows:
%pip install tablegpt-agent[local]
Setup the LLM Service¶
Before using TableGPT Agent, ensure you have an OpenAI-compatible server configured to host TableGPT2. We recommend using vllm for this:
python -m vllm.entrypoints.openai.api_server --served-model-name TableGPT2-7B --model path/to/weights
Notes:
- To analyze tabular data with tablegpt-agent, make sure TableGPT2 is served with vllm version 0.5.5 or higher.
- For production environments, it's important to optimize the vllm server configuration. For details, refer to the vllm documentation on server configuration.
Chat with TableGPT Agent¶
To create an agent, you'll need at least an LLM instance and a PyBoxManager:
NOTE 1: This tutorial uses langchain-openai for the llm instance. Please install it first:
pip install langchain-openai
NOTE 2: TableGPT Agent fully supports async invocation. To start a Python console that supports asynchronous operations, run the following command:
python -m asyncio
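If you prefer a plain script over the asyncio REPL, you can wrap the awaited call in `asyncio.run()` instead. This is a minimal sketch: the placeholder coroutine below stands in for the `await agent.ainvoke(_input)` call shown later, so the example is self-contained.

```python
import asyncio

async def main():
    # In a real script this would be: response = await agent.ainvoke(_input)
    await asyncio.sleep(0)  # placeholder for the awaited agent call
    return "done"

result = asyncio.run(main())
print(result)  # → done
```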
In the console or notebook, set up the agent as follows:
from langchain_openai import ChatOpenAI
from pybox import LocalPyBoxManager
from tablegpt.agent import create_tablegpt_graph
from tablegpt import DEFAULT_TABLEGPT_IPYKERNEL_PROFILE_DIR
llm = ChatOpenAI(openai_api_base="YOUR_VLLM_URL", openai_api_key="whatever", model_name="TableGPT2-7B")
pybox_manager = LocalPyBoxManager(profile_dir=DEFAULT_TABLEGPT_IPYKERNEL_PROFILE_DIR)
agent = create_tablegpt_graph(
llm=llm,
pybox_manager=pybox_manager,
)
To interact with the agent:
from datetime import date
from langchain_core.messages import HumanMessage
message = HumanMessage(content="Hi")
_input = {
"messages": [message],
"parent_id": "some-parent-id",
"date": date.today(),
}
response = await agent.ainvoke(_input)
response["messages"]
[HumanMessage(content='Hi', additional_kwargs={}, response_metadata={}, id='34fe748c-81ab-49ea-bec6-9c621598a61a'), AIMessage(content="Hello! How can I assist you with data analysis today? Please let me know the details of the dataset you're working with and what specific analysis you'd like to perform.", additional_kwargs={'parent_id': 'some-parent-id'}, response_metadata={}, id='a1ee29d2-723e-41c7-b420-27d0cfaed5dc')]
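If you only need the text of the agent's final reply rather than the full message list, a small helper can extract it. This is a hypothetical convenience function, not part of tablegpt-agent; it assumes each message exposes a `.content` attribute, as langchain-core messages do.

```python
# Hypothetical helper (not part of tablegpt-agent): pull the text of the
# agent's final reply out of the returned state. Assumes each message
# exposes a `.content` attribute, as langchain-core messages do.
def last_reply(response):
    return response["messages"][-1].content
```

Calling `last_reply(response)` on the result above would return the assistant's greeting string.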
You can get more detailed outputs with the astream_events method:
async for event in agent.astream_events(
input=_input,
version="v2",
):
# We ignore irrelevant events here.
if event["event"] == "on_chat_model_end":
print(event["data"]["output"])
content='Hello! How can I assist you with your data analysis today? Please let me know what dataset you are working with and what specific analyses or visualizations you would like to perform.' additional_kwargs={} response_metadata={'finish_reason': 'stop', 'model_name': 'TableGPT2-7B'} id='run-525eb149-0e3f-4b04-868b-708295f789ac'