egrace479 committed on
Commit e2e760e · verified · 1 Parent(s): c7db1ba

Add paired py file for notebook from first analysis

Files changed (1)
  1. notebooks/lilabc_CT.py +571 -0
notebooks/lilabc_CT.py ADDED
@@ -0,0 +1,571 @@
1
+ # ---
2
+ # jupyter:
3
+ # jupytext:
4
+ # formats: ipynb,py:percent
5
+ # text_representation:
6
+ # extension: .py
7
+ # format_name: percent
8
+ # format_version: '1.3'
9
+ # jupytext_version: 1.16.0
10
+ # kernelspec:
11
+ # display_name: std
12
+ # language: python
13
+ # name: python3
14
+ # ---
15
+
16
+ # %%
17
+ import pandas as pd
18
+ import seaborn as sns
19
+
20
+ sns.set_style("whitegrid")
21
+
22
+ # %%
23
+ df = pd.read_csv("../data/lila_image_urls_and_labels.csv")
24
+ df.head()
25
+
26
+ # %%
27
+ df.columns
28
+
29
+ # %%
30
+ df.annotation_level.value_counts()
31
+
32
+ # %% [markdown]
33
+ # Annotation level indicates image vs. sequence (or unknown); it is not analogous to `taxonomy_level` from lila-taxonomy-mapping_release.csv. It seems `original_label` may be the analogous column.
34
+
35
+ # %%
36
+ df.sample(10)
37
+
38
+ # %%
39
+ df.info(show_counts = True)
40
+
41
+ # %%
42
+ df.nunique()
43
+
44
+ # %% [markdown]
45
+ # We have 667 unique species indicated (this matches lila-taxonomy-mapping_release.csv after dropping humans, though there seem to be two more distinct genera here).
46
+
47
+ # %%
48
+ # Check for humans
49
+ df.loc[df.species == "homo sapien"]
50
+
51
+ # %% [markdown]
52
+ # Let's start by removing entries whose `original_label` is `empty`.
53
+
54
+ # %%
55
+ df_cleaned = df.loc[df.original_label != "empty"]
56
+
57
+ # %% [markdown]
58
+ # We started with 16,833,848 entries and are left with 8,448,597 after removing everything labeled `empty`, so about half of the images remain.
59
+ #
60
+ # Note that about 2 million of the remaining entries are missing a species label and about 1.5 million are missing a genus designation.
61
+
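+ # %% [markdown]
+ # A quick sketch to make those figures concrete (the non-null counts also appear in the `info()` output below):
+
+ # %%
+ # Entries removed by dropping the `empty` label, and remaining nulls in genus/species
+ print(len(df) - len(df_cleaned), "entries labeled `empty` were removed")
+ print(df_cleaned[['genus', 'species']].isna().sum())
+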
62
+ # %%
63
+ df_cleaned[['genus', 'species']].info(show_counts = True)
64
+
65
+ # %%
66
+ df_cleaned[['genus', 'species']].nunique()
67
+
68
+ # %%
69
+ df_cleaned.loc[df_cleaned['genus'].isna(), 'species'].value_counts()
70
+
71
+ # %% [markdown]
72
+ # All entries missing `genus` are also missing `species`, so we'll drop all entries with null `genus`.
73
+
74
+ # %%
75
+ df_genusSpecies = df_cleaned.dropna(subset = "genus")
76
+
77
+ # %%
78
+ df_genusSpecies.info(show_counts = True)
79
+
80
+ # %%
81
+ df_genusSpecies.nunique()
82
+
83
+ # %% [markdown]
84
+ # This leaves us with 476 unique genera and 667 unique species among the remaining 6,978,606 entries in the 18 datasets.
85
+ #
86
+ # There are about 77,000 non-unique URLs. Does this mean there are duplicated entries for images, or are they sequences? The unique `image_id` count shows the same gap, so it's unclear.
87
+
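+ # %% [markdown]
+ # A sketch making the "about 77,000" figure concrete: the gap between total rows and unique URLs.
+
+ # %%
+ len(df_genusSpecies) - df_genusSpecies['url'].nunique()
+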
88
+ # %%
89
+ df_genusSpecies.duplicated(subset = ['url'], keep = False).value_counts()
90
+
91
+ # %% [markdown]
92
+ # This tracks with the rough estimate: we see 152,889 entries that are duplicates of each other, of which 76,787 are repeat occurrences after the first (see below).
93
+
94
+ # %%
95
+ df_genusSpecies.duplicated(subset = ['url'], keep = 'first').value_counts()
96
+
97
+ # %%
98
+ df_genusSpecies.duplicated(subset = ['url', 'image_id'], keep = 'first').value_counts()
99
+
100
+ # %% [markdown]
101
+ # These match the duplicated `image_id`s as well. Perhaps this is more than one animal in the frame? In that case it is important to mark all the potential duplicates as duplicates, not just the instances after the image's first occurrence.
102
+
103
+ # %%
104
+ df_genusSpecies['url_dupe'] = df_genusSpecies.duplicated(subset = ['url'], keep = False)
105
+
106
+ # %%
107
+ df_genusSpecies.sample(10)
108
+
109
+ # %%
110
+ duplicated_urls = df_genusSpecies.loc[df_genusSpecies.url_dupe]
111
+ duplicated_urls.head(10)
112
+
113
+ # %% [markdown]
114
+ # From this small sample, it looks like the duplicates are distinguished by their frame number (`frame_num`) and by the animal itself: when more than one animal (of a different species) is in frame, the image information is repeated, with one entry per species in the image.
115
+
116
+ # %% [markdown]
117
+ # We have 1,046,985 distinct sequence IDs for the 6,901,819 unique image IDs, suggesting an average of roughly 6 to 7 images per sequence.
118
+
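+ # %% [markdown]
+ # A quick sketch checking that average: count unique `image_id`s per `sequence_id` within each dataset.
+
+ # %%
+ images_per_seq = df_genusSpecies.groupby(['dataset_name', 'sequence_id'])['image_id'].nunique()
+ images_per_seq.describe()
+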
119
+ # %%
120
+ df_genusSpecies.head(1).values
121
+
122
+ # %% [markdown]
123
+ # The end of the URL (`cct_images/...jpg`) matches the `file` field in the MegaDetector model results from LILA BC. We will use agreement about the animal's presence from both the [v5a and v5b models](https://lila.science/megadetector-results-for-camera-trap-datasets/) (v4 for Snapshot Serengeti) to determine which images from a sequence to keep: only one per sequence, to avoid testing on the same animal (instance) multiple times.
124
+ #
125
+ # First, let's remove the images that have multiple animals, as indicated by the `url_dupe` column.
126
+
127
+ # %%
128
+ dedupe_genusSpecies = df_genusSpecies.loc[~df_genusSpecies.url_dupe]
129
+ dedupe_genusSpecies.head()
130
+
131
+ # %% [markdown]
132
+ # Let's quickly check our stats on this subset (e.g., `species`/`genus` values).
133
+
134
+ # %%
135
+ dedupe_genusSpecies.info(show_counts = True)
136
+
137
+ # %%
138
+ cols = list(dedupe_genusSpecies.columns[:11])
139
+ cols.append('genus')
140
+ cols.append('species')
141
+ dedupe_genusSpecies[cols].nunique()
142
+
143
+ # %% [markdown]
144
+ # All datasets are still represented, all URLs and `image_id`s are unique, and we still have 667 distinct species.
145
+ #
146
+ # We should probably drop the null `species` values since we have such a large number of images and prefer to keep the labels as specific as possible.
147
+
148
+ # %%
149
+ dedupe_species = dedupe_genusSpecies.loc[dedupe_genusSpecies.species.notna()]
150
+ dedupe_species.head()
151
+
152
+ # %% [markdown]
153
+ # Have any of our other stats changed significantly?
154
+
155
+ # %%
156
+ dedupe_species[cols].info(show_counts = True)
157
+
158
+ # %%
159
+ dedupe_species[cols].nunique()
160
+
161
+ # %% [markdown]
162
+ # We lost 17 locations, 24 genera, and 86 common names, but all datasets are still represented.
163
+
164
+ # %%
165
+ # Some sample image URLs
166
+ dedupe_species['url'].head().values
167
+
168
+ # %% [markdown]
169
+ # ## Save a Species Label CSV
170
+ #
171
+ # This dataset contains all images with labels down to the species level. We will gather some statistics on it to determine how to truncate to just one instance of each animal per sequence.
172
+
173
+ # %%
174
+ dedupe_species.to_csv("../data/lila_image_urls_and_labels_species.csv", index = True)
175
+
176
+ # %% [markdown]
177
+ # ### Species Stats
178
+ #
179
+ # Let's get some statistics on this data to help narrow it down:
180
+ # - Do we have instances of common name matching scientific name? There are more unique instances of `common_name` than `species` or `scientific_name`.
181
+ # - What is the minimum number of instances of a particular species? We'll balance the dataset to have the smallest number available (assuming 20+ images).
182
+ # - For later evaluation, what's the distribution of time of day (will be more meaningful for the finalized species dataset)?
183
+
184
+ # %%
185
+ dedupe_species['common_species_match'] = dedupe_species.common_name == dedupe_species.species
186
+
187
+ # %%
188
+ dedupe_species['common_sciName_match'] = dedupe_species.common_name == dedupe_species.scientific_name
189
+
190
+ # %%
191
+ dedupe_species.common_species_match.value_counts()
192
+
193
+ # %%
194
+ dedupe_species.common_sciName_match.value_counts()
195
+
196
+ # %% [markdown]
197
+ # Common name is not filled with scientific name or species values for any of these images.
198
+ #
199
+ # Now let's count the images per species. We may also want to check scientific name and common name, since there are more unique values of those.
200
+
201
+ # %%
202
+ num_species_dict = {}
203
+ for species in dedupe_species.species.unique():
204
+ num_species = len(dedupe_species.loc[dedupe_species.species == species])
205
+ num_species_dict[species] = num_species
206
+ num_species_dict
207
+
208
+ # %%
209
+ # Add this info to the dataset
210
+ for species, num in num_species_dict.items():
211
+ dedupe_species.loc[dedupe_species.species == species, 'num_species'] = num
212
+
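+ # %% [markdown]
+ # A sketch of an equivalent, vectorized way to get the same per-species counts (using `value_counts`/`map` instead of the loops above):
+
+ # %%
+ # Count images per species and map the counts back onto each row
+ species_counts = dedupe_species['species'].value_counts()
+ dedupe_species['num_species'] = dedupe_species['species'].map(species_counts)
+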
213
+ # %%
214
+ more_samples = {}
215
+ for species, num in num_species_dict.items():
216
+ if num >= 40:
217
+ more_samples[species] = num
218
+
219
+ more_samples
220
+
221
+ # %%
222
+ print("We have ", len(more_samples), " species for which there are at least 40 images.")
223
+
224
+ # %%
225
+ mid_samples = {}
226
+ for species, num in num_species_dict.items():
227
+ if num >= 20:
228
+ mid_samples[species] = num
229
+
230
+ print("We have ", len(mid_samples), " species for which there are at least 20 images.")
231
+
232
+ # %% [markdown]
233
+ # ## One Image per Species per Sequence
234
+ #
235
+ # Now let's generate the final dataset we'll use for testing. When there are sequences of images, there's the clear potential to capture multiple images of the same animal, so we'll check for duplicates of `sequence_id` and `species` pairings within datasets and keep the first instance of each species within a sequence. We expect to have between 948,156 and 1,000,000 labeled images as a result.
236
+
237
+ # %%
238
+ dedupe_species['multi-image'] = dedupe_species.duplicated(subset = ['dataset_name', 'sequence_id', 'species', 'location_id'], keep = 'first')
239
+
240
+ # %%
241
+ dedupe_species.head()
242
+
243
+ # %%
244
+ dedupe_species['multi-image'].value_counts()
245
+
246
+ # %%
247
+ species_single_seq = dedupe_species.loc[~dedupe_species['multi-image']]
248
+ species_single_seq.sample(7)
249
+
250
+ # %%
251
+ species_single_seq.to_csv("../data/lila_image_urls_and_labels_SingleSpecies.csv", index = False)
252
+
253
+ # %%
254
+ species_single_seq[cols].nunique()
255
+
256
+ # %% [markdown]
257
+ # Same number of unique common and scientific names.
258
+
259
+ # %% [markdown]
260
+ # #### Check Number of Images per Species
261
+
262
+ # %%
263
+ num_singleSpecies_dict = {}
264
+ for species in species_single_seq.species.unique():
265
+ num_species = len(species_single_seq.loc[species_single_seq.species == species])
266
+ num_singleSpecies_dict[species] = num_species
267
+ num_singleSpecies_dict
268
+
269
+ # %%
270
+ more_singleSamples = {}
271
+ for species, num in num_singleSpecies_dict.items():
272
+ if num >= 40:
273
+ more_singleSamples[species] = num
274
+
275
+ print("We have ", len(more_singleSamples), " species for which there are at least 40 images.")
276
+ print()
277
+
278
+ mid_singleSamples = {}
279
+ for species, num in num_singleSpecies_dict.items():
280
+ if num >= 20:
281
+ mid_singleSamples[species] = num
282
+
283
+ print("We have ", len(mid_singleSamples), " species for which there are at least 20 images.")
284
+
285
+ # %%
286
+ # Add the counts to the dataset for filtering
287
+ for species, num in num_singleSpecies_dict.items():
288
+ species_single_seq.loc[species_single_seq.species == species, 'num_singleSpecies'] = num
289
+
290
+ # %%
291
+ total_more_images = 0
292
+ for species, num in more_singleSamples.items():
293
+ total_more_images += num
294
+
295
+ total_mid_images = 0
296
+ for species, num in mid_singleSamples.items():
297
+ total_mid_images += num
298
+
299
+ # %%
300
+ print("With a 20 image per species cutoff, we have ", total_mid_images, "total images.")
301
+ print("With a 40 image per species cutoff, we have ", total_more_images, "total images.")
302
+
303
+ # %% [markdown]
304
+ # Let's look at the family distribution for these.
305
+
306
+ # %%
307
+ import seaborn as sns
308
+
309
+ sns.set_style("whitegrid")
310
+ sns.set(rc = {'figure.figsize': (10,10)})
311
+
312
+ # %%
313
+ sns.histplot(species_single_seq.loc[species_single_seq.num_singleSpecies >= 40], y = 'family')
314
+
315
+ # %%
316
+ sns.histplot(species_single_seq.loc[species_single_seq.num_singleSpecies >= 20], y = 'family')
317
+
318
+ # %% [markdown]
319
+ # ### Check for images marked as having an animal that are actually empty
320
+ # This can occur when images are labeled as a sequence, and it is suggested that this is not an unusual occurrence. We will use the [MegaDetector results](https://lila.science/megadetector-results-for-camera-trap-datasets/) provided by LILA BC to check for these. Ultimately, we won't use repeated instances of the same animal in the same sequence, so removing the extra instances may also alleviate this issue. Additionally, it's worth noting that MegaDetector is trained to err on the side of detection (i.e., it's more likely to see an animal that's not there), so this check is not likely to change our assessment.
321
+ #
322
+ # #### Caltech Camera Traps
323
+
324
+ # %% [markdown]
325
+ # Look up a couple sample file values in the model results JSON (with and without detection).
326
+
327
+ # %%
328
+ df.loc[df['url'].str.contains("cct_images/5862934b-23d2-11e8-a6a3-ec086b02610b")]
329
+
330
+ # %%
331
+ df.loc[df['url'].str.contains("cct_images/5862934c-23d2-11e8-a6a3-ec086b02610b.jpg")]
332
+
333
+ # %%
334
+ df.loc[df['url'].str.contains("cct_images/5874d5d3-23d2-11e8-a6a3-ec086b02610b.jpg")]
335
+
336
+ # %%
337
+ import json
338
+
339
+ # %%
340
+ dedupe_species.dataset_name.value_counts()
341
+
342
+ # %% [markdown]
343
+ # Most of our images are coming from NACTI and Snapshot Serengeti, so those may be the most important to focus on, but it's good to check through all of them. Those two will also take longer, so let's start with the smaller datasets for some preliminary checks. As noted above, this is not likely to change our results.
344
+
345
+ # %%
346
+ mdv5b_files = {"Caltech Camera Traps": "caltech-camera-traps_mdv5a.0.0_results.json",
347
+ "Channel Islands Camera Traps": "channel-islands-camera-traps_mdv5b.0.0_results.json",
348
+ "ENA24": "ena24_mdv5b.0.0_results.json",
349
+ "Idaho Camera Traps": "idaho-camera-traps_mdv5b.0.0_results.json",
350
+ "Island Conservation Camera Traps": "island-conservation-camera-traps_mdv5b.0.0_results.json",
351
+ "Missouri Camera Traps": "missouri-camera-traps_mdv5b.0.0_results.json",
352
+ "Orinoquia Camera Traps": "orinoquia-camera-traps_public_mdv5b.0.0_results.json",
353
+ "Snapshot Camdeboo": "snapshot-safari_CDB_mdv5b.0.0_results.json",
354
+ "Snapshot Enonkishu": "snapshot-safari_ENO_mdv5b.0.0_results.json",
355
+ "Snapshot Karoo": "snapshot-safari_KAR_mdv5b.0.0_results.json",
356
+ "Snapshot Kgalagadi": "snapshot-safari_KGA_mdv5b.0.0_results.json",
357
+ "Snapshot Kruger": "snapshot-safari_KRU_mdv5b.0.0_results.json",
358
+ "Snapshot Mountain Zebra": "snapshot-safari_MTZ_mdv5b.0.0_results.json",
359
+ "SWG Camera Traps": "swg-camera-traps_public_mdv5b.0.0_results.json",
360
+ "WCS Camera Traps": "wcs-camera-traps_animals_mdv5b.0.0_results.json",
361
+ "Wellington Camera Traps": "wellington-camera-traps_images_mdv5b.0.0_results.json"}
362
+
363
+
364
+ # %%
365
+ def filter_md_results(dataset_name, filename):
366
+ with open("../MegaDetector_results/" + filename) as file:
367
+ data = json.load(file)
368
+ df_mdv5b = pd.json_normalize(data["images"], max_level = 1)
369
+ print(df_mdv5b.head())
370
+ dedupe_url = list(dedupe_species.loc[dedupe_species["dataset_name"] == dataset_name, 'url'])
371
+ dedupe_url_empties = []
372
+ for file in list(df_mdv5b.loc[(df_mdv5b['max_detection_conf'] <= 0.8) & (df_mdv5b['detections'].astype(str) != '[]'), 'file']):  # parenthesize each comparison before `&`; MegaDetector confidences are on a 0-1 scale
373
+ if file in dedupe_url:
374
+ dedupe_url_empties.append(file)
375
+ print(dataset_name, ": ", dedupe_url_empties)
376
+ return dedupe_url_empties
377
+
378
+
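+ # %% [markdown]
+ # Example usage for a single dataset (a sketch; assumes the ENA24 results file is present under `../MegaDetector_results/`):
+
+ # %%
+ ena24_flagged = filter_md_results("ENA24", mdv5b_files["ENA24"])
+ print(len(ena24_flagged), "ENA24 images flagged")
+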
379
+ # %% [markdown]
380
+ # ### Earlier check on `[]`
381
+
382
+ # %%
383
+ with open("../MegaDetector_results/caltech-camera-traps_mdv5a.0.0_results.json") as file:
384
+ data = json.load(file)
385
+
386
+ # %%
387
+ data
388
+
389
+ # %%
390
+ cct_mdv5a = pd.json_normalize(data["images"], max_level = 1)
391
+
392
+ # %%
393
+ cct_mdv5a.head(10)
394
+
395
+ # %% [markdown]
396
+ # `file` matches the end of the `url` value in our DataFrame. We want to filter out any empty detections that are still in `dedupe_species`. We could also consider lower confidences, but let's start there.
397
+
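+ # %% [markdown]
+ # If `file` holds only the tail of the URL, exact membership against the full URLs may never match; a sketch of one way to line them up is to reduce the URLs to the same tail first (assuming the last two path components, e.g. `cct_images/<id>.jpg`, are what `file` records):
+
+ # %%
+ # URL tails for the Caltech Camera Traps entries we kept
+ cct_url_tails = set(
+     dedupe_species.loc[dedupe_species['dataset_name'] == "Caltech Camera Traps", 'url']
+     .apply(lambda url: "/".join(url.split("/")[-2:]))
+ )
+ # Files with no detections whose tail appears in our table
+ empty_files = cct_mdv5a.loc[cct_mdv5a['detections'].astype(str) == '[]', 'file']
+ len([f for f in empty_files if f in cct_url_tails])
+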
398
+ # %%
399
+ len(cct_mdv5a)
400
+
401
+ # %%
402
+ type(cct_mdv5a.detections[5])
403
+
404
+ # %%
405
+ cct_mdv5a.loc[cct_mdv5a['detections'].astype(str) == '[]']
406
+
407
+ # %%
408
+ dedupe_url_cct = list(dedupe_species.loc[dedupe_species['dataset_name'] == "Caltech Camera Traps", 'url'])
409
+ dedupe_cct_empties = []
410
+ for file in list(cct_mdv5a.loc[cct_mdv5a['detections'].astype(str) == '[]', 'file']):
411
+ if file in dedupe_url_cct:
412
+ dedupe_cct_empties.append(file)
413
+
414
+ dedupe_cct_empties
415
+
416
+ # %% [markdown]
417
+ # CCT seems fine.
418
+
419
+ # %% [markdown]
420
+ # #### WCS Camera Traps
421
+ # Since these come with a note that empty images may be labeled with a species from elsewhere in the sequence, let's check this as well.
422
+
423
+ # %%
424
+ with open("../MegaDetector_results/wcs-camera-traps_animals_mdv5a.0.0_results.json") as file:
425
+ data = json.load(file)
426
+
427
+ # %%
428
+ data
429
+
430
+ # %% [markdown]
431
+ # Looks like the formatting is consistent.
432
+
433
+ # %%
434
+ wcs_mdv5a = pd.json_normalize(data["images"], max_level = 1)
435
+
436
+ # %%
437
+ wcs_mdv5a.head()
438
+
439
+ # %%
440
+ list(dedupe_species.loc[dedupe_species['dataset_name'] == "WCS Camera Traps", 'url'])
441
+
442
+ # %%
443
+ dedupe_url_wcs = list(dedupe_species.loc[dedupe_species['dataset_name'] == "WCS Camera Traps", 'url'])
444
+ dedupe_wcs_empties = []
445
+ for file in list(wcs_mdv5a.loc[wcs_mdv5a['detections'].astype(str) == '[]', 'file']):
446
+ if file in dedupe_url_wcs:
447
+ dedupe_wcs_empties.append(file)
448
+
449
+ dedupe_wcs_empties
450
+
451
+ # %% [markdown]
452
+ # MegaDetector v5b is supposed to have fewer false positives than v5a, so let's check that and see if we're still in the clear.
453
+
454
+ # %%
455
+ with open("../MegaDetector_results/wcs-camera-traps_animals_mdv5b.0.0_results.json") as file:
456
+ data = json.load(file)
457
+
458
+ # %%
459
+ wcs_mdv5b = pd.json_normalize(data["images"], max_level = 1)
460
+ wcs_mdv5b.head()
461
+
462
+ # %%
463
+ dedupe_wcs_empties = []
464
+ for file in list(wcs_mdv5b.loc[wcs_mdv5b['detections'].astype(str) == '[]', 'file']):
465
+ if file in dedupe_url_wcs:
466
+ dedupe_wcs_empties.append(file)
467
+
468
+ dedupe_wcs_empties
469
+
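+ # %% [markdown]
+ # To use agreement between v5a and v5b (as planned above), a sketch: recompute the v5a flags (the variable was reused for v5b) and intersect the two lists.
+
+ # %%
+ wcs_empties_v5a = [
+     f for f in wcs_mdv5a.loc[wcs_mdv5a['detections'].astype(str) == '[]', 'file']
+     if f in dedupe_url_wcs
+ ]
+ # Images both model versions flag as having no detections
+ set(wcs_empties_v5a) & set(dedupe_wcs_empties)
+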
470
+ # %% [markdown]
471
+ # #### NACTI
472
+
473
+ # %%
474
+ with open("../MegaDetector_results/nacti_mdv5b.0.0_results.json") as file:
475
+ data = json.load(file)
476
+
477
+ # %%
478
+ nacti_mdv5b = pd.json_normalize(data["images"], max_level = 1)
479
+ nacti_mdv5b.head()
480
+
481
+ # %%
482
+ dedupe_url_nacti = list(dedupe_species.loc[dedupe_species['dataset_name'] == "NACTI", 'url'])
483
+ dedupe_nacti_empties = []
484
+ for file in list(nacti_mdv5b.loc[nacti_mdv5b['detections'].astype(str) == '[]', 'file']):
485
+ if file in dedupe_url_nacti:
486
+ dedupe_nacti_empties.append(file)
487
+
488
+ dedupe_nacti_empties
489
+
490
+
491
+ # %% [markdown]
492
+ # #### Check Remaining Datasets
493
+ #
494
+ # Let's automate the process with a function that takes the dataset name and results filename and returns any of our images marked empty by the model.
495
+
496
+ # %%
497
+ def check_md_results(dataset_name, filename):
498
+ with open("../MegaDetector_results/" + filename) as file:
499
+ data = json.load(file)
500
+ df_mdv5b = pd.json_normalize(data["images"], max_level = 1)
501
+ print(df_mdv5b.head())
502
+ dedupe_url = list(dedupe_species.loc[dedupe_species["dataset_name"] == dataset_name, 'url'])
503
+ dedupe_url_empties = []
504
+ for file in list(df_mdv5b.loc[df_mdv5b['detections'].astype(str) == '[]', 'file']):
505
+ if file in dedupe_url:
506
+ dedupe_url_empties.append(file)
507
+ print(dataset_name, ": ", dedupe_url_empties)
508
+ return dedupe_url_empties
509
+
510
+
511
+ # %%
512
+ mdv5b_files = {"Channel Islands Camera Traps": "channel-islands-camera-traps_mdv5b.0.0_results.json",
513
+ "ENA24": "ena24_mdv5b.0.0_results.json",
514
+ "Idaho Camera Traps": "idaho-camera-traps_mdv5b.0.0_results.json",
515
+ "Island Conservation Camera Traps": "island-conservation-camera-traps_mdv5b.0.0_results.json",
516
+ "Missouri Camera Traps": "missouri-camera-traps_mdv5b.0.0_results.json",
517
+ "Orinoquia Camera Traps": "orinoquia-camera-traps_public_mdv5b.0.0_results.json",
518
+ "Snapshot Camdeboo": "snapshot-safari_CDB_mdv5b.0.0_results.json",
519
+ "Snapshot Enonkishu": "snapshot-safari_ENO_mdv5b.0.0_results.json",
520
+ "Snapshot Karoo": "snapshot-safari_KAR_mdv5b.0.0_results.json",
521
+ "Snapshot Kgalagadi": "snapshot-safari_KGA_mdv5b.0.0_results.json",
522
+ "Snapshot Kruger": "snapshot-safari_KRU_mdv5b.0.0_results.json",
523
+ "Snapshot Mountain Zebra": "snapshot-safari_MTZ_mdv5b.0.0_results.json",
524
+ "SWG Camera Traps": "swg-camera-traps_public_mdv5b.0.0_results.json",
525
+ "Wellington Camera Traps": "wellington-camera-traps_images_mdv5b.0.0_results.json"}
526
+
527
+
528
+ # %%
529
+ empties = {}
530
+ for key in list(mdv5b_files.keys()):
531
+ empties[key] = check_md_results(key, mdv5b_files[key])
532
+
533
+ # %%
534
+ empties
535
+
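+ # %% [markdown]
+ # A quick per-dataset summary of how many of our images were flagged (a sketch):
+
+ # %%
+ {dataset: len(files) for dataset, files in empties.items()}
+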
536
+ # %% [markdown]
537
+ # #### Snapshot Serengeti
538
+ #
539
+ # Snapshot Serengeti was evaluated with MegaDetector v4 due to questions raised by [this issue](https://github.com/ultralytics/yolov5/issues/9294) as noted on [LILA BC's site](https://lila.science/megadetector-results-for-camera-trap-datasets/).
540
+
541
+ # %%
542
+ #Snapshot Serengeti was evaluated with MegaDetector v4
543
+ mdv4_files = ["snapshot-serengeti-mdv4.1.0_results.json/snapshot-serengeti_S" + str(i) + "_mdv4.1.0_results.json" for i in range(1,11)]
544
+ mdv4_files.append("snapshot-serengeti-mdv4.1.0_results.json/snapshot-serengeti_SER_S11_mdv4.1.0_results.json")
545
+
546
+ # %%
547
+ mdv4_empties = {}
548
+ for file in mdv4_files:
549
+ mdv4_empties[file] = check_md_results("Snapshot Serengeti", file)
550
+
551
+ # %%
552
+ # Only the first 4 files ran above; continue with the rest here
553
+ for file in mdv4_files[4:]:
554
+ mdv4_empties[file] = check_md_results("Snapshot Serengeti", file)
555
+
556
+ # %% [markdown]
557
+ #
558
+
559
+ # %%
560
+ with open("../MegaDetector_results/snapshot-serengeti-mdv4.1.0_results.json/snapshot-serengeti_S1_mdv4.1.0_results.json") as file:
561
+ data = json.load(file)
562
+
563
+ serengeti_df_mdv4 = pd.json_normalize(data["images"], max_level = 1)
564
+
565
+ # %%
566
+ len(serengeti_df_mdv4.loc[serengeti_df_mdv4.max_detection_conf >= .8])
567
+
568
+ # %%
569
+ len(serengeti_df_mdv4)
570
+
571
+ # %%
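+ # A small sketch: fraction of Snapshot Serengeti S1 images with a high-confidence (>= 0.8) detection
+ len(serengeti_df_mdv4.loc[serengeti_df_mdv4.max_detection_conf >= .8]) / len(serengeti_df_mdv4)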