Import fixes #161
Conversation
| annotations = new_entry | ||
| else: | ||
| annotations = annotations.append(new_entry) | ||
| annotations = pd.concat([annotations, new_entry]) |
So for concat, it's a bit slower if we append each row to a DataFrame one at a time. According to the docs, it's slightly better to save each new_entry to a list and then concat the list once.
So basically:
rows = []
for i in etc:
    rows.append(new_entry)  # each new row for the future df
df = pd.concat(rows)
Got it. TBH I was just doing a find-and-replace for this PR and didn't fully look at all the context. I'll go back and make the necessary changes.
Sean1572
left a comment
See comments. Make sure to build a list of rows and then concat once, so we don't call concat over and over inside each for loop. According to https://pandas.pydata.org/docs/reference/api/pandas.concat.html it's better practice.
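The list-then-concat pattern the review asks for can be sketched as follows. This is a minimal illustration with made-up column names and a dummy loop; `new_entry` stands in for the per-row DataFrame built in the PR's actual loops:

```python
import pandas as pd

# Calling pd.concat inside the loop is quadratic: each call copies
# everything accumulated so far. Instead, collect the per-iteration
# DataFrames in a plain Python list and concat once at the end.
rows = []
for i in range(3):
    # Dummy stand-in for the per-clip/per-annotation DataFrame in the PR
    new_entry = pd.DataFrame({"clip": [f"clip_{i}.wav"], "score": [i * 0.5]})
    rows.append(new_entry)

# Single concat call outside the loop
annotations = pd.concat(rows, ignore_index=True)
print(annotations)
```

`ignore_index=True` renumbers the combined index; drop it if the original indices matter.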
| # Open file with librosa (uses ffmpeg or libav) | ||
| print("Path: ", path) | ||
| # Open file with librosa (uses ffmanaeg or libav) |
Change ffmanaeg back to ffmpeg
| statistics_df = clip_stats_df | ||
| else: | ||
| statistics_df = statistics_df.append(clip_stats_df) | ||
| statistics_df = pd.concat([statistics_df,clip_stats_df]) |
Same note about pandas concatenation applies here.
| start_time = time.time() | ||
| if num_errors > 0: | ||
| checkVerbose("Something went wrong with" + num_errors + "clips out of" + str(len(clips)) + "clips", verbose) | ||
| checkVerbose(f"Something went wrong with {num_errors} clips out of {len(clips)} clips", verbose) |
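Worth noting: the old line would have raised at runtime, not just printed awkwardly, because `num_errors` (an int) is added directly to a str. The f-string fixes both the crash and the missing spacing. A minimal illustration, assuming `num_errors` and `clips` hold the kinds of values in the diff:

```python
num_errors = 2
clips = ["a.wav", "b.wav", "c.wav"]

# The old version raises: TypeError: can only concatenate str (not "int") to str
try:
    msg = "Something went wrong with" + num_errors + "clips out of" + str(len(clips)) + "clips"
except TypeError as e:
    print("old version fails:", e)

# The f-string converts each value to str and keeps the spacing readable
msg = f"Something went wrong with {num_errors} clips out of {len(clips)} clips"
print(msg)
```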
Make sure to remove the pyha tutorial from the PR before submitting.
Added resampy to toml and lock files
Corrected pd.DataFrame.append to pd.concat
Other minor fixes