Closed
Labels: bug (Something isn't working)
Description
Describe the bug
Hi, thanks a lot for creating this fantastic project — Marimo is really impressive and exciting to use!
I noticed that Marimo currently uses Python-Markdown to render markdown. However, its parsing rules seem too strict and are not always compatible with markdown generated by LLMs.
For example:
- A blank line is required before starting a list.
- Multi-level lists must use a four-space indent to render correctly.
This makes it harder to directly render markdown from LLM outputs, which usually follow more permissive conventions.
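The strict behavior described above is easy to reproduce with the markdown package directly (a minimal sketch; the input strings are made up for illustration):

```python
import markdown  # Python-Markdown, the renderer Marimo currently uses

# 1) A list that immediately follows a paragraph line (no blank line in
#    between) is not recognized as a list -- it stays inside the paragraph.
no_blank_line = "Advantages:\n- reactive\n- reproducible"
html = markdown.markdown(no_blank_line)
print(html)  # a single <p>...</p>, no <ul>/<li>

# 2) A sub-item indented with two spaces (common in LLM output) does not
#    nest; Python-Markdown needs a four-space indent for nested lists.
two_space = markdown.markdown("- parent\n  - child")
four_space = markdown.markdown("- parent\n    - child")
print(two_space.count("<ul>"))   # 1: flat, no nesting
print(four_space.count("<ul>"))  # 2: properly nested
```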
Would it be possible to consider adopting a more relaxed and widely supported standard, such as CommonMark or GitHub Flavored Markdown (GFM), for rendering? This could make LLM-generated markdown more compatible out of the box.
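As a point of comparison, a CommonMark parser accepts the same inputs. A minimal sketch using markdown-it-py (chosen here only as one CommonMark implementation; Marimo does not currently depend on it):

```python
from markdown_it import MarkdownIt

md = MarkdownIt("commonmark")

# CommonMark lets a bullet list interrupt a paragraph, so no blank line
# is required before the list.
html = md.render("Advantages:\n- reactive\n- reproducible")
print(html)  # contains <ul>/<li> items

# Two-space indentation is enough to nest "child" under "parent",
# because it reaches the parent item's content column.
nested = md.render("- parent\n  - child")
print(nested.count("<ul>"))  # 2: nested list
```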
Thanks again for the amazing work!
Will you submit a PR?
- Yes
Environment
{
  "marimo": "0.16.1",
  "editable": false,
  "location": "/home/akari/anaconda3/envs/agent/lib/python3.12/site-packages/marimo",
  "OS": "Linux",
  "OS Version": "6.16.7-200.fc42.x86_64",
  "Processor": "",
  "Python Version": "3.12.11",
  "Locale": "en_US",
  "Binaries": {
    "Browser": "140.0.7339.185",
    "Node": "v22.19.0"
  },
  "Dependencies": {
    "click": "8.3.0",
    "docutils": "0.22.2",
    "itsdangerous": "2.2.0",
    "jedi": "0.19.2",
    "markdown": "3.9",
    "narwhals": "2.5.0",
    "packaging": "25.0",
    "psutil": "7.1.0",
    "pygments": "2.19.2",
    "pymdown-extensions": "10.16.1",
    "pyyaml": "6.0.2",
    "starlette": "0.48.0",
    "tomlkit": "0.13.3",
    "typing-extensions": "4.15.0",
    "uvicorn": "0.35.0",
    "websockets": "15.0.1"
  },
  "Optional Dependencies": {
    "nbformat": "5.10.4",
    "openai": "1.108.1",
    "loro": "1.8.1",
    "python-lsp-server": "1.13.1",
    "ruff": "0.13.1"
  },
  "Experimental Flags": {}
}
Code to reproduce
import marimo

__generated_with = "0.16.1"

app = marimo.App(width="medium")


@app.cell
def _():
    import marimo as mo
    from langchain_openai import ChatOpenAI
    from dotenv import load_dotenv, find_dotenv

    load_dotenv(find_dotenv(usecwd=True))

    llm = ChatOpenAI(model="gpt-4.1")
    response = llm.invoke(
        [
            ("system", "You are a helpful assistant"),
            ("human", "Use a brief paragraph and a multilevel unordered list to show the advantages of marimo over Jupyter."),
        ]
    )
    return mo, response


@app.cell
def _(mo, response):
    mo.md(response.content)
    return


@app.cell
def _(response):
    print(response.content)
    return


if __name__ == "__main__":
    app.run()