I was trying to plot my data using Pygwalker. The data is a CSV file of about 467 MB with shape (3682080, 12). My code is:
```python
from pygwalker.api.streamlit import StreamlitRenderer
import pandas as pd
import streamlit as st

# Adjust the width of the Streamlit page
st.set_page_config(
    page_title="Use Pygwalker In Streamlit",
    layout="wide"
)

# Add Title
st.title("Use Pygwalker In Streamlit")

# You should cache your pygwalker renderer, if you don't want your memory to explode
@st.cache_resource
def get_pyg_renderer() -> "StreamlitRenderer":
    df = pd.read_csv("/data.csv")
    # If you want to use feature of saving chart config, set `spec_io_mode="rw"`
    return StreamlitRenderer(df, kernel_computation=True)

renderer = get_pyg_renderer()
renderer.explorer()
```
I tried to use pygwalker both inside Jupyter and via Streamlit; both gave me the error "The query returned too many data entries, making it difficult for the frontend to render. Please adjust your chart configuration and try again."
Screenshot:
The visualization gets stuck at loading and then times out. Is there any workaround to render my data? What chart configuration should I adjust?
Thank you for bringing up this issue with pygwalker. By default, pygwalker enforces a fixed limit on data queries to keep memory usage in the frontend browser safe.
When the count(distinct t) of a query exceeds 1,000,000 (1 million), it becomes difficult for the frontend to render that much data into a chart efficiently.
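In the meantime, a common workaround is to reduce the data before handing it to the renderer, either by aggregating or by sampling. A minimal pandas sketch (the DataFrame and the column name `value` are hypothetical stand-ins for the 3,682,080-row CSV from the question):

```python
import pandas as pd

# Hypothetical stand-in for the large CSV from the question
df = pd.DataFrame({"value": range(100_000)})

MAX_ROWS = 10_000  # stay well under the frontend rendering limit

# Randomly sample the frame down to a renderable size before
# passing it to StreamlitRenderer
if len(df) > MAX_ROWS:
    df = df.sample(n=MAX_ROWS, random_state=42)

print(len(df))
```

Sampling keeps the overall distribution of the data while cutting the row count by an order of magnitude, which is usually enough for exploratory charting.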
To address this issue, we are considering adding a new parameter that allows users to control the maximum data size for rendering. This parameter will provide flexibility and allow users to adjust the size according to their specific needs.
One possible solution is to introduce the following code snippet, which sets the maximum data length to 10,000,000 (10 million):
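Since the proposed snippet itself is not shown above, here is a sketch of how such a configurable cap might behave. The name `max_data_length` and the helper function are hypothetical, illustrating the proposed behavior rather than a released pygwalker API:

```python
import pandas as pd

# Hypothetical default, mirroring the proposed 10,000,000-row cap;
# not a released pygwalker setting
MAX_DATA_LENGTH = 10_000_000

def check_query_size(df: pd.DataFrame, max_data_length: int = MAX_DATA_LENGTH) -> pd.DataFrame:
    """Raise if a query result is too large for the frontend to render."""
    if len(df) > max_data_length:
        raise ValueError(
            f"The query returned {len(df)} data entries, "
            f"exceeding the limit of {max_data_length}."
        )
    return df
```

With the cap raised to 10 million, the 3,682,080-row frame from the question would pass such a check unchanged.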