DuckDB Python Relational API equivalent of `select sum(a) filter (where b>1)`
If I have
“n_unique” aggregation using DuckDB relational API which counts nulls
Similar to “n_unique” aggregation using DuckDB relational API
“n_unique” aggregation using DuckDB relational API
Say I have
Minimum periods in rolling mean
Say I have:
Cannot alter DuckDB table due to conversion error although the rows with issues were already deleted
Using Python 3.10, I'm trying to read some data into a DuckDB (v1.0.0) table, delete some rows, and then cast columns to a different data type. The problem is that the deleted rows contained values that cannot be cast to the new type; since those rows were deleted, I did not expect this to be a problem.
How to force DuckDB to reconnect to a file and prevent it from remembering contents of a deleted database file?
Motivation: I'm working on a larger dashboard application which uses duckdb. I found a bug which in a nutshell is like this:
# create new connection
con = duckdb.connect('/somedir/duckdb.db')
# do something
# remove or replace the file
Path('/somedir/duckdb.db').unlink()
shutil.copy('/otherdir/duckdb.db', '/somedir/duckdb.db')
# create new connection to the *new* file with same location
con = […]
How to write CSV data directly from string (or bytes) to a duckdb database file in Python?
I would like to write CSV data directly from a bytes (or string) object in memory to duckdb database file (i.e. I want to avoid having to write and read the temporary .csv files). This is what I’ve got so far:
Getting an error while doing DATE_ADD in DuckDB
DuckDB version: v0.10.2
Python version: 3.11.4