I have a pandas DataFrame of the form:
id start_time sequence_no value
0 71 2018-10-17 20:12:43+00:00 114428 3
1 71 2018-10-17 20:12:43+00:00 114429 3
2 71 2018-10-17 20:12:43+00:00 114431 79
3 71 2019-11-06 00:51:14+00:00 216009 100
4 71 2019-11-06 00:51:14+00:00 216011 150
5 71 2019-11-06 00:51:14+00:00 216013 180
6 92 2019-12-01 00:51:14+00:00 114430 19
7 92 2019-12-01 00:51:14+00:00 114433 79
8 92 2019-12-01 00:51:14+00:00 114434 100
I'm trying to fill in the missing sequence_no values per id/start_time combo. For example, the id/start_time pairing of 71 and 2018-10-17 20:12:43+00:00 is missing sequence_no 114430. For each missing sequence_no that gets added, I also need an averaged/interpolated value for the missing value column; e.g., sequence_no 114430 should get (3 + 79) / 2 = 41. So the final processing of the above data should end up looking like this:
id start_time sequence_no value
0 71 2018-10-17 20:12:43+00:00 114428 3
1 71 2018-10-17 20:12:43+00:00 114429 3
2 71 2018-10-17 20:12:43+00:00 114430 41 **
3 71 2018-10-17 20:12:43+00:00 114431 79
4 71 2019-11-06 00:51:14+00:00 216009 100
5 71 2019-11-06 00:51:14+00:00 216010 125 **
6 71 2019-11-06 00:51:14+00:00 216011 150
7 71 2019-11-06 00:51:14+00:00 216012 165 **
8 71 2019-11-06 00:51:14+00:00 216013 180
9 92 2019-12-01 00:51:14+00:00 114430 19
10 92 2019-12-01 00:51:14+00:00 114431 39 **
11 92 2019-12-01 00:51:14+00:00 114432 59 **
12 92 2019-12-01 00:51:14+00:00 114433 79
13 92 2019-12-01 00:51:14+00:00 114434 100
(** added to the right of the newly inserted rows for readability)
My original solution to this relied heavily on Python loops over a large data table, so it seemed like the ideal place for numpy and pandas to shine.
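For context, a loop-based approach along those lines might look like the following (a hypothetical sketch only, not my exact original code; it assumes df is the frame shown above):

import pandas as pd

# Naive baseline: walk each id/start_time group row by row in Python,
# inserting a linearly interpolated row for every missing sequence_no.
rows = []
for (start_time, group_id), group in df.groupby(['start_time', 'id']):
    known = dict(zip(group['sequence_no'], group['value']))
    seqs = sorted(known)
    for seq in range(seqs[0], seqs[-1] + 1):
        if seq in known:
            rows.append((group_id, start_time, seq, known[seq]))
        else:
            # linear interpolation between the nearest known neighbors
            lo = max(s for s in seqs if s < seq)
            hi = min(s for s in seqs if s > seq)
            value = known[lo] + (known[hi] - known[lo]) * (seq - lo) / (hi - lo)
            rows.append((group_id, start_time, seq, value))
result = pd.DataFrame(rows, columns=['id', 'start_time', 'sequence_no', 'value'])

Leaning on SO answers like Pandas: create rows to fill numeric gaps, I came up with this: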
import pandas as pd
import numpy as np

# Generate dummy data
df = pd.DataFrame([
    (71, '2018-10-17 20:12:43+00:00', 114428, 3),
    (71, '2018-10-17 20:12:43+00:00', 114429, 3),
    (71, '2018-10-17 20:12:43+00:00', 114431, 79),
    (71, '2019-11-06 00:51:14+00:00', 216009, 100),
    (71, '2019-11-06 00:51:14+00:00', 216011, 150),
    (71, '2019-11-06 00:51:14+00:00', 216013, 180),
    (92, '2019-12-01 00:51:14+00:00', 114430, 19),
    (92, '2019-12-01 00:51:14+00:00', 114433, 79),
    (92, '2019-12-01 00:51:14+00:00', 114434, 100),
], columns=['id', 'start_time', 'sequence_no', 'value'])

# create a new DataFrame with the min/max `sequence_no` values for each `id`/`start_time` pairing
by_start = df.groupby(['start_time', 'id'])
ranges = by_start.agg(
    sequence_min=('sequence_no', np.min), sequence_max=('sequence_no', np.max)
)
reset = ranges.reset_index()
mins = reset['sequence_min']
maxes = reset['sequence_max']

# Use those min/max values to generate a sequence with ALL values in that range
expanded = pd.DataFrame(dict(
    start_time=reset['start_time'].repeat(maxes - mins + 1),
    id=reset['id'].repeat(maxes - mins + 1),
    sequence_no=np.concatenate([np.arange(lo, hi + 1) for lo, hi in zip(mins, maxes)])
))

# Use the above generated DataFrame as an index to generate the missing rows, then interpolate
expanded_index = pd.MultiIndex.from_frame(expanded)
df.set_index(
    ['start_time', 'id', 'sequence_no']
).reindex(expanded_index).interpolate()
The output is correct, but it runs at roughly the same speed as my Python-loop solution. I'm sure there are places where a few steps could be trimmed, but in my testing the slowest part is the reindex. Given that the real-world data consists of nearly a million rows (operated on frequently), is there an obvious way to gain a performance advantage over what I've already written? Any way to speed up this transformation?
Update 12/12/2019
Combining the merge solution from this answer with the original construction of the expanded DataFrame yields the fastest results so far when tested on a sufficiently large dataset:
import pandas as pd
import numpy as np

# Generate dummy data
df = pd.DataFrame([
    (71, '2018-10-17 20:12:43+00:00', 114428, 3),
    (71, '2018-10-17 20:12:43+00:00', 114429, 3),
    (71, '2018-10-17 20:12:43+00:00', 114431, 79),
    (71, '2019-11-06 00:51:14+00:00', 216009, 100),
    (71, '2019-11-06 00:51:14+00:00', 216011, 150),
    (71, '2019-11-06 00:51:14+00:00', 216013, 180),
    (92, '2019-12-01 00:51:14+00:00', 114430, 19),
    (92, '2019-12-01 00:51:14+00:00', 114433, 79),
    (92, '2019-12-01 00:51:14+00:00', 114434, 100),
], columns=['id', 'start_time', 'sequence_no', 'value'])

# create a ranges df with groupby and agg
ranges = df.groupby(['start_time', 'id'])['sequence_no'].agg([
    ('sequence_min', np.min), ('sequence_max', np.max)
])
reset = ranges.reset_index()
mins = reset['sequence_min']
maxes = reset['sequence_max']

# Use those min/max values to generate a sequence with ALL values in that range
expanded = pd.DataFrame(dict(
    start_time=reset['start_time'].repeat(maxes - mins + 1),
    id=reset['id'].repeat(maxes - mins + 1),
    sequence_no=np.concatenate([np.arange(lo, hi + 1) for lo, hi in zip(mins, maxes)])
))

# merge expanded and df
merge = expanded.merge(df, on=['start_time', 'id', 'sequence_no'], how='left')

# interpolate and assign values
merge['value'] = merge['value'].interpolate()
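To see which variant actually wins at scale, something like the following can be used (a rough sketch: the synthetic-data generator, its parameters, and the fill_with_merge name are all hypothetical, not part of my original testing):

import timeit

# Hypothetical synthetic data: n_groups id/start_time pairs, each covering a
# sequence range with some interior values randomly dropped to create gaps.
def make_synthetic(n_groups=1000, seq_len=100, keep_prob=0.7, seed=0):
    rng = np.random.default_rng(seed)
    frames = []
    for g in range(n_groups):
        seqs = np.arange(g * seq_len, (g + 1) * seq_len)
        mask = rng.random(seq_len) < keep_prob
        mask[0] = mask[-1] = True  # keep endpoints so every gap is interior
        frames.append(pd.DataFrame({
            'id': g,
            'start_time': '2019-01-01 00:00:00+00:00',
            'sequence_no': seqs[mask],
            'value': np.linspace(0, 100, seq_len)[mask],
        }))
    return pd.concat(frames, ignore_index=True)

big = make_synthetic()
# Wrap each candidate (reindex-based, merge-based, ...) in a function that
# takes a DataFrame, then compare, e.g.:
# print(timeit.timeit(lambda: fill_with_merge(big), number=3))

Wrapping each variant in a function keeps the comparison honest, since the small dummy df above is far too small to show meaningful timing differences.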
Answer
Using merge instead of the reindex may speed things up. Using map instead of the list comprehension may also help:
import pandas as pd
import numpy as np

# Generate dummy data
df = pd.DataFrame([
    (71, '2018-10-17 20:12:43+00:00', 114428, 3),
    (71, '2018-10-17 20:12:43+00:00', 114429, 3),
    (71, '2018-10-17 20:12:43+00:00', 114431, 79),
    (71, '2019-11-06 00:51:14+00:00', 216009, 100),
    (71, '2019-11-06 00:51:14+00:00', 216011, 150),
    (71, '2019-11-06 00:51:14+00:00', 216013, 180),
    (92, '2019-12-01 00:51:14+00:00', 114430, 19),
    (92, '2019-12-01 00:51:14+00:00', 114433, 79),
    (92, '2019-12-01 00:51:14+00:00', 114434, 100),
], columns=['id', 'start_time', 'sequence_no', 'value'])

# create a ranges df with groupby and agg
ranges = df.groupby(['start_time', 'id'])['sequence_no'].agg([('sequence_min', np.min), ('sequence_max', np.max)])

# map with range to create the full sequence_no range per group
ranges['sequence_no'] = list(map(lambda x, y: range(x, y), ranges.pop('sequence_min'), ranges.pop('sequence_max') + 1))

# explode your DataFrame so each sequence_no gets its own row
new_df = ranges.explode('sequence_no')

# explode leaves sequence_no as object dtype; cast it back so the merge keys match
new_df['sequence_no'] = new_df['sequence_no'].astype(df['sequence_no'].dtype)

# merge new_df and df
merge = new_df.reset_index().merge(df, on=['start_time', 'id', 'sequence_no'], how='left')

# interpolate and assign values
merge['value'] = merge['value'].interpolate()
start_time id sequence_no value
0 2018-10-17 20:12:43+00:00 71 114428 3.0
1 2018-10-17 20:12:43+00:00 71 114429 3.0
2 2018-10-17 20:12:43+00:00 71 114430 41.0
3 2018-10-17 20:12:43+00:00 71 114431 79.0
4 2019-11-06 00:51:14+00:00 71 216009 100.0
5 2019-11-06 00:51:14+00:00 71 216010 125.0
6 2019-11-06 00:51:14+00:00 71 216011 150.0
7 2019-11-06 00:51:14+00:00 71 216012 165.0
8 2019-11-06 00:51:14+00:00 71 216013 180.0
9 2019-12-01 00:51:14+00:00 92 114430 19.0
10 2019-12-01 00:51:14+00:00 92 114431 39.0
11 2019-12-01 00:51:14+00:00 92 114432 59.0
12 2019-12-01 00:51:14+00:00 92 114433 79.0
13 2019-12-01 00:51:14+00:00 92 114434 100.0
Answer
A shorter version of the merge solution:
# the astype casts explode's object-dtype sequence_no back so the merge keys match
df.groupby(['start_time', 'id'])['sequence_no']\
    .apply(lambda x: np.arange(x.min(), x.max() + 1))\
    .explode().reset_index()\
    .astype({'sequence_no': df['sequence_no'].dtype})\
    .merge(df, on=['start_time', 'id', 'sequence_no'], how='left')\
    .interpolate()
Output:
start_time id sequence_no value
0 2018-10-17 20:12:43+00:00 71 114428 3.0
1 2018-10-17 20:12:43+00:00 71 114429 3.0
2 2018-10-17 20:12:43+00:00 71 114430 41.0
3 2018-10-17 20:12:43+00:00 71 114431 79.0
4 2019-11-06 00:51:14+00:00 71 216009 100.0
5 2019-11-06 00:51:14+00:00 71 216010 125.0
6 2019-11-06 00:51:14+00:00 71 216011 150.0
7 2019-11-06 00:51:14+00:00 71 216012 165.0
8 2019-11-06 00:51:14+00:00 71 216013 180.0
9 2019-12-01 00:51:14+00:00 92 114430 19.0
10 2019-12-01 00:51:14+00:00 92 114431 39.0
11 2019-12-01 00:51:14+00:00 92 114432 59.0
12 2019-12-01 00:51:14+00:00 92 114433 79.0
13 2019-12-01 00:51:14+00:00 92 114434 100.0
Answer
Another solution with reindex, without using explode:
result = (df.groupby(["id", "start_time"])
            .apply(lambda d: d.set_index("sequence_no")
                              .reindex(range(min(d["sequence_no"]), max(d["sequence_no"]) + 1)))
            .drop(["id", "start_time"], axis=1)
            .reset_index()
            .interpolate())

print(result)
id start_time sequence_no value
0 71 2018-10-17 20:12:43+00:00 114428 3.0
1 71 2018-10-17 20:12:43+00:00 114429 3.0
2 71 2018-10-17 20:12:43+00:00 114430 41.0
3 71 2018-10-17 20:12:43+00:00 114431 79.0
4 71 2019-11-06 00:51:14+00:00 216009 100.0
5 71 2019-11-06 00:51:14+00:00 216010 125.0
6 71 2019-11-06 00:51:14+00:00 216011 150.0
7 71 2019-11-06 00:51:14+00:00 216012 165.0
8 71 2019-11-06 00:51:14+00:00 216013 180.0
9 92 2019-12-01 00:51:14+00:00 114430 19.0
10 92 2019-12-01 00:51:14+00:00 114431 39.0
11 92 2019-12-01 00:51:14+00:00 114432 59.0
12 92 2019-12-01 00:51:14+00:00 114433 79.0
13 92 2019-12-01 00:51:14+00:00 114434 100.0