DataFrame QA: A Universal LLM Framework on DataFrame Question Answering Without Data Exposure

Document Type

Conference Proceeding

Publication Date

1-1-2024

Abstract

This paper introduces DataFrame question answering (QA), a novel task that uses natural language processing (NLP) models to generate Pandas queries for information retrieval and data analysis on dataframes, with an emphasis on safe, non-revealing data handling. Specifically, our method, which leverages large language models (LLMs) and relies solely on dataframe column names, not only ensures data privacy but also significantly reduces the context window of the prompt, streamlining information processing and addressing major challenges in LLM-based data analysis. We propose DataFrame QA as a comprehensive framework that includes safe Pandas query generation and code execution. Various LLMs are evaluated on the well-known WikiSQL dataset and on our newly developed UCI-DataFrameQA, tailored for complex data analysis queries. Our findings indicate that GPT-4 performs well on both datasets, underscoring its capability to securely retrieve and aggregate dataframe values and to conduct sophisticated data analyses. The approach is deployable in a zero-shot manner without prior training or adjustment, making it highly adaptable and secure for diverse applications. Our code and dataset are available at https://github.com/JunyiYe/dataframe-qa.
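The workflow described in the abstract (prompt the LLM with column names only, then safely execute the returned Pandas query) can be illustrated with a minimal sketch. The prompt wording, the placeholder `call_llm` function, and the restricted-execution details below are assumptions for illustration, not the authors' exact implementation.

    import pandas as pd

    PROMPT_TEMPLATE = (
        "You are given a Pandas dataframe named `df` with columns: {columns}.\n"
        "Write a single Pandas expression that answers the question below.\n"
        "Question: {question}\n"
        "Return only the expression, with no explanation."
    )

    def build_prompt(df: pd.DataFrame, question: str) -> str:
        # Only column names are placed in the prompt; no cell values are exposed.
        return PROMPT_TEMPLATE.format(columns=list(df.columns), question=question)

    def call_llm(prompt: str) -> str:
        # Hypothetical placeholder for any LLM backend (e.g. GPT-4).
        raise NotImplementedError("plug in your LLM client here")

    def run_query(df: pd.DataFrame, query: str):
        # Execute the generated expression in a restricted namespace so the
        # code can only access `df` and the pandas module.
        allowed = {"df": df, "pd": pd}
        return eval(query, {"__builtins__": {}}, allowed)

    if __name__ == "__main__":
        df = pd.DataFrame({"city": ["Newark", "Jersey City"],
                           "population": [311549, 292449]})
        question = "Which city has the largest population?"
        prompt = build_prompt(df, question)
        # query = call_llm(prompt)
        query = "df.loc[df['population'].idxmax(), 'city']"  # example model output
        print(run_query(df, query))  # -> "Newark"

In practice, the generated code would also need sandboxing beyond a restricted `eval` namespace (the paper's framework emphasizes safe code execution); this sketch only shows the data-flow: column names in, Pandas query out, local execution on the private dataframe.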

Identifier

85219506783 (Scopus)

Publication Title

Proceedings of Machine Learning Research

e-ISSN

2640-3498

First Page

575

Last Page

590

Volume

260

Grant

2310261

Fund Ref

National Science Foundation
