This is a continuation of a question I asked previously in How to efficiently select the top N columns by grouping for each row of a pandas DataFrame?. My needs have evolved as I work with my dataset — thanks again to everyone who has helped me so far. Here is the problem as it currently stands:
Let’s say that I have a pandas DataFrame representing the scores of each “contestant” of a hypothetical contest organized by date. Note that there may be sporadic NaN values interspersed:
import numpy as np
import pandas as pd
rng = np.random.default_rng()
dates = pd.date_range('2024-08-01', '2024-08-07')
contestants = ['Alligator', 'Beryl', 'Chupacabra', 'Dandelion', 'Eggplant', 'Feldspar']
scores = rng.random(len(dates) * len(contestants))
scores[rng.integers(len(scores), size=10)] = np.nan
scores = scores.reshape((len(dates), len(contestants)))
scores = pd.DataFrame(scores, dates, contestants)
scores.index.name = 'DATE'
scores.columns.name = 'CONTESTANT'
CONTESTANT Alligator Beryl Chupacabra Dandelion Eggplant Feldspar
DATE
2024-08-01 0.425859 0.869790 0.025546 0.249784 0.164426 0.292931
2024-08-02 0.545743 0.245658 0.384288 0.148041 0.759137 NaN
2024-08-03 0.558930 0.773545 0.215342 0.644964 0.204309 NaN
2024-08-04 0.448075 NaN NaN 0.795700 0.744143 0.807003
2024-08-05 0.858097 0.349170 0.339740 0.445206 NaN 0.118371
2024-08-06 NaN 0.847647 0.086368 0.806557 NaN NaN
2024-08-07 0.167334 0.063111 0.152129 0.823477 0.613271 0.709280
In addition, each contestant is mapped to a particular category:
category_mapping = {
    'Alligator': 'Animal',
    'Beryl': 'Mineral',
    'Chupacabra': 'Animal',
    'Dandelion': 'Vegetable',
    'Eggplant': 'Vegetable',
    'Feldspar': 'Mineral'
}
Given this setup, how do I retain the best score from each category in each row, zeroing out or setting to NaN any scores that do not make the cut? For example, the results should look something like this:
CONTESTANT Alligator Beryl Chupacabra Dandelion Eggplant Feldspar
DATE
2024-08-01 0.425859 0.869790 0.000000 0.249784 0.000000 0.000000
2024-08-02 0.545743 0.245658 0.000000 0.000000 0.759137 NaN
2024-08-03 0.558930 0.773545 0.000000 0.644964 0.000000 NaN
2024-08-04 0.448075 NaN NaN 0.795700 0.000000 0.807003
2024-08-05 0.858097 0.349170 0.000000 0.445206 NaN 0.000000
2024-08-06 NaN 0.847647 0.086368 0.806557 NaN NaN
2024-08-07 0.167334 0.000000 0.000000 0.823477 0.000000 0.709280
In addition, how do I go about making this fast? My actual application is a Monte Carlo simulation operating on a DataFrame with about 250 rows, 15000 columns, and 175 categories, so efficiency is key here. I’ve gotten this to mostly work through a combination of answers from @mozway and @rezan21 (transpose-groupby-idxmax-transpose) but I suspect that my approach is suboptimal and could be much better. Thanks for your help!
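For concreteness, the transpose-groupby-idxmax-transpose combination I've been using looks roughly like the sketch below (names are illustrative, and I've left NaNs out of the sample to keep `idxmax` simple; the label-based mask-building loop is part of why I suspect it's suboptimal):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
dates = pd.date_range('2024-08-01', '2024-08-07')
contestants = ['Alligator', 'Beryl', 'Chupacabra', 'Dandelion', 'Eggplant', 'Feldspar']
category_mapping = {'Alligator': 'Animal', 'Beryl': 'Mineral', 'Chupacabra': 'Animal',
                    'Dandelion': 'Vegetable', 'Eggplant': 'Vegetable', 'Feldspar': 'Mineral'}
scores = pd.DataFrame(rng.random((len(dates), len(contestants))), dates, contestants)

cat = scores.columns.map(category_mapping)
# rows: categories, columns: dates, values: the winning contestant's label
winners = scores.T.groupby(cat).idxmax()
# rebuild a boolean mask of winning cells from those labels, one date at a time
mask = pd.DataFrame(False, index=scores.index, columns=scores.columns)
for date in scores.index:
    mask.loc[date, winners[date]] = True
out = scores.where(mask | scores.isna(), 0)
```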
Rather similar to the accepted answer by @mozway to your linked post:
- Map the column labels to the categories (`cat`).
- Use `df.T`, apply `df.groupby` with `groupby.transform` + `max`, and transpose again (`max_transform`).
- Use `df.where` and check for equality between `scores` and `max_transform`.
- Add the alternative condition `scores.isna()` if you want to preserve `np.nan` values.
cat = scores.columns.map(category_mapping)
max_transform = scores.T.groupby(cat).transform('max').T
out = scores.where(scores == max_transform, 0)
Output:
CONTESTANT Alligator Beryl Chupacabra Dandelion Eggplant Feldspar
DATE
2024-08-01 0.425859 0.869790 0.000000 0.249784 0.000000 0.000000
2024-08-02 0.545743 0.245658 0.000000 0.000000 0.759137 0.000000
2024-08-03 0.558930 0.773545 0.000000 0.644964 0.000000 0.000000
2024-08-04 0.448075 0.000000 0.000000 0.795700 0.000000 0.807003
2024-08-05 0.858097 0.349170 0.000000 0.445206 0.000000 0.000000
2024-08-06 0.000000 0.847647 0.086368 0.806557 0.000000 0.000000
2024-08-07 0.167334 0.000000 0.000000 0.823477 0.000000 0.709280
# preserve `NaN` values
scores.where((scores == max_transform) | (scores.isna()), 0)
CONTESTANT Alligator Beryl Chupacabra Dandelion Eggplant Feldspar
DATE
2024-08-01 0.425859 0.869790 0.000000 0.249784 0.000000 0.000000
2024-08-02 0.545743 0.245658 0.000000 0.000000 0.759137 NaN
2024-08-03 0.558930 0.773545 0.000000 0.644964 0.000000 NaN
2024-08-04 0.448075 NaN NaN 0.795700 0.000000 0.807003
2024-08-05 0.858097 0.349170 0.000000 0.445206 NaN 0.000000
2024-08-06 NaN 0.847647 0.086368 0.806557 NaN NaN
2024-08-07 0.167334 0.000000 0.000000 0.823477 0.000000 0.709280
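For the larger Monte Carlo case (~250 rows × 15,000 columns × 175 categories), you may be able to shave off the repeated transposes by dropping to NumPy. The sketch below assumes the category layout is fixed across simulation iterations, so the per-category column indices can be precomputed once outside the loop (`keep_category_best` and `col_idx` are illustrative names; note `np.nanmax` emits a RuntimeWarning on all-NaN slices, though the result is still correct):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
dates = pd.date_range('2024-08-01', '2024-08-07')
contestants = ['Alligator', 'Beryl', 'Chupacabra', 'Dandelion', 'Eggplant', 'Feldspar']
category_mapping = {'Alligator': 'Animal', 'Beryl': 'Mineral', 'Chupacabra': 'Animal',
                    'Dandelion': 'Vegetable', 'Eggplant': 'Vegetable', 'Feldspar': 'Mineral'}
scores = pd.DataFrame(rng.random((len(dates), len(contestants))), dates, contestants)
scores.iloc[1, 5] = np.nan  # a few sporadic NaNs, as in the question
scores.iloc[3, 1] = np.nan

# precompute once, outside the Monte Carlo loop
cat = scores.columns.map(category_mapping)
codes, groups = pd.factorize(cat)
col_idx = [np.flatnonzero(codes == g) for g in range(len(groups))]

def keep_category_best(df):
    """Zero out all but each row's per-category maximum, preserving NaNs."""
    arr = df.to_numpy()
    out = np.zeros_like(arr)
    for cols in col_idx:
        block = arr[:, cols]
        m = np.nanmax(block, axis=1, keepdims=True)  # NaN-aware row maxima
        # keep the winners and any pre-existing NaNs; zero the rest
        out[:, cols] = np.where((block == m) | np.isnan(block), block, 0.0)
    return pd.DataFrame(out, index=df.index, columns=df.columns)

out = keep_category_best(scores)
```

With only ~175 categories, the Python-level loop over `col_idx` is cheap relative to the vectorized `nanmax` over each block, and the per-call work stays entirely in NumPy. Worth benchmarking against the pandas version on your real shapes before committing to it.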
Data sample
import pandas as pd
import numpy as np
data = {'index': ['2024-08-01', '2024-08-02', '2024-08-03', '2024-08-04',
'2024-08-05', '2024-08-06', '2024-08-07'],
'columns': ['Alligator', 'Beryl', 'Chupacabra', 'Dandelion',
'Eggplant', 'Feldspar'],
'data': [[0.425859, 0.86979, 0.025546, 0.249784, 0.164426, 0.292931],
[0.545743, 0.245658, 0.384288, 0.148041, 0.759137, np.nan],
[0.55893, 0.773545, 0.215342, 0.644964, 0.204309, np.nan],
[0.448075, np.nan, np.nan, 0.7957, 0.744143, 0.807003],
[0.858097, 0.34917, 0.33974, 0.445206, np.nan, 0.118371],
[np.nan, 0.847647, 0.086368, 0.806557, np.nan, np.nan],
[0.167334, 0.063111, 0.152129, 0.823477, 0.613271, 0.70928]],
'index_names': ['DATE'],
'column_names': ['CONTESTANT']
}
scores = pd.DataFrame.from_dict(data, orient='tight')