For almost all of human history, the main concern about food centered on one goal: getting enough of it. Only in the past few decades has food ceased to be a limited resource for many. Today, food is abundant for most - but not all - inhabitants of high- and middle-income countries, and its role has correspondingly changed. Whereas the primary goal of food used to be to provide sufficient energy, today the main public health challenges are the avoidance of excessive calories and the nutritional composition of diets.
Recognizing food from images is extremely useful for a variety of use cases. In particular, it would allow people to track their food intake simply by taking a picture of what they consume. Food tracking can be of personal interest and is often of medical relevance as well. Medical studies have long been interested in the food intake of study participants but have had to rely on food frequency questionnaires, which are known to be imprecise.
Image-based food recognition has made substantial progress thanks to advances in deep learning in the past few years. But food recognition remains a difficult problem for a variety of reasons.
This is the 3rd consecutive year we are hosting this benchmark on AIcrowd. This benchmark builds upon the success of the 2019/2020/2021 Food Recognition Challenge.
The goal of this benchmark is to train models that can look at images of food items and detect the individual food items present in them. We use a novel dataset of food images collected through the MyFoodRepo app, where numerous volunteer Swiss users provide images of their daily food intake in the context of a digital cohort called Food & You. This growing dataset has been annotated - or automatic annotations have been verified - with respect to segmentation, classification (mapping the individual food items onto an ontology of Swiss food items), and weight/volume estimation.
This is an evolving dataset, where we will continue to release more data as the dataset grows over time.
Finding annotated food images is difficult. There are some databases with some annotations, but they tend to be limited in important ways.
To put it bluntly: most food images on the internet are a lie. Search for any dish, and you'll find beautiful stock photography of that particular dish. The same goes for social media: we share photos of dishes with our friends when the image is exceptionally beautiful. But algorithms need to work on real-world images. In addition, annotations are generally missing - ideally, food images would be annotated with proper segmentation, classification, and volume/weight estimates. With this 2022 iteration of the Food Recognition Benchmark, we release v2.0 of the MyFoodRepo dataset, containing a training set of 39,962 images of food items, annotated with 498 food classes.
The dataset for the AIcrowd Food Recognition Benchmark is available at https://www.aicrowd.com/challenges/food-recognition-benchmark-2022/dataset_files
This dataset contains:
- public_training_set_release_2.0.tar.gz: the Training Set of 39,962 food images (as RGB images), along with their corresponding annotations for 498 food classes in MS-COCO format
- public_validation_set_2.0.tar.gz: the suggested Validation Set of 1,000 food images (as RGB images), along with their corresponding annotations for the same 498 food classes in MS-COCO format
- public_test_release_2.0.tar.gz: the Public Test Set for Food Recognition Benchmark 2022: Round 1
To get started, we would advise you to download all the files and untar them inside the data/ folder of this repository so that you have a directory structure like this:
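Once extracted, the annotation files follow the MS-COCO format. Here is a minimal, stdlib-only sketch of that structure (all field values below are made up for illustration; the real files contain the full 39,962 images and 498 classes, and are usually loaded with pycocotools):

```python
# Sketch of a COCO-style annotation file, mirroring the structure of the
# benchmark's annotation JSON. Field names follow the MS-COCO spec; the
# concrete ids, names, and coordinates here are illustrative only.
sample = {
    "images": [{"id": 1, "file_name": "065537.jpg", "width": 480, "height": 480}],
    "categories": [{"id": 2578, "name": "water", "supercategory": "food"}],
    "annotations": [{
        "id": 10, "image_id": 1, "category_id": 2578,
        # Segmentation is a list of polygons: [x1, y1, x2, y2, ...]
        "segmentation": [[10.0, 10.0, 100.0, 10.0, 100.0, 100.0, 10.0, 100.0]],
        "bbox": [10.0, 10.0, 90.0, 90.0],  # [x, y, width, height]
        "area": 8100.0, "iscrowd": 0,
    }],
}

# Index annotations by image id, similar to what pycocotools' COCO class
# builds internally with getAnnIds/loadAnns.
by_image = {}
for ann in sample["annotations"]:
    by_image.setdefault(ann["image_id"], []).append(ann)

print(f"image 1 has {len(by_image[1])} annotated food item(s)")
```

In practice you would point `pycocotools.coco.COCO` at the extracted annotation JSON instead of building the index by hand.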
💪 An open benchmark
For all the reasons mentioned above, food recognition is a difficult but important problem. Algorithms that could tackle this problem would be extremely useful for everyone. That is why we are establishing this open benchmark for food recognition. The goal is simple: provide high-quality data, and get developers around the world excited about addressing this problem in an open way.
Because of the complexity of the problem, a one-shot approach won’t work. This is a benchmark for the long run.
If you are interested in providing more annotated data, please contact us.
This is an ongoing, multi-round benchmark. The specific tasks and/or datasets will be updated at each round, and each round will have its own prizes. You can participate in a single round or in multiple rounds.
- Round 1: December 20th, 2021 - February 20th, 2022
- Round 2: March 1st, 2022 - May 1st, 2022
🥶 Team freeze deadline for Round 1: 10th February 2022, 12:00 UTC
👥 Participation Routes
There are two routes for participating in the challenge.
|Quick Participation 🏃|Active Participation 👨‍💻|
|---|---|
|You upload prediction JSON files|You submit code (and AIcrowd evaluators run the code to generate predictions)|
|Scores are computed on 40% of the publicly released test set|Scores are computed on 100% of the publicly released test set + 40% of the (unreleased) extended test set|
|You are not eligible for the final leaderboard (and prizes)|You are eligible for the final leaderboard and prizes|
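For the Quick Participation route, predictions are uploaded as a COCO-style results file. A hedged sketch of what one entry might look like (the exact schema is defined by the challenge's starter kit; the ids, score, and the RLE placeholder below are illustrative only):

```python
import json

# One entry per detected food item: the image id, the predicted class id,
# a segmentation mask (run-length encoded, as pycocotools produces), and
# a confidence score. All values here are made up for illustration.
predictions = [
    {
        "image_id": 1,
        "category_id": 2578,
        "segmentation": {"size": [480, 480], "counts": "<RLE-encoded mask>"},
        "score": 0.91,
    },
]

# Serialize the whole list as a single JSON array for upload.
with open("predictions.json", "w") as f:
    json.dump(predictions, f)
```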
The flow for Active Participation looks as follows:
🏁 First-to-cross Prize
The first participant or team to reach an AP of 0.47 on the leaderboard will receive a DJI Mavic Mini 2 as a prize! 🎉
(This prize is awarded to the first such participant/team on the Active Participation track, and is valid through both rounds of the challenge.)
💪 Round 2 prizes (Active participation track)
🥇 1st Prize: DJI FPV Drone Combo
🥈 2nd Prize: DJI Mavic Air 2
🥉 3rd Prize: Oculus Quest 2
The prizes will be awarded based on the final leaderboard for Round 2.
📝 Paper Authorships
Top participants from Round 1 and Round 2 of the Benchmark will be invited to be co-authors of the dataset release paper and the challenge solution paper.
If you have any questions, please let us know on the challenge forum.
You can find more details on making a submission to the benchmark in the official starter kit here.
🖊 Evaluation Criteria
The benchmark uses the official detection evaluation metrics used by COCO.
The primary evaluation metric is AP @ IoU=0.50:0.05:0.95. The secondary evaluation metric is AR @ IoU=0.50:0.05:0.95.
A further discussion about the evaluation metric can be found here.
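To make the metric concrete, here is a small self-contained sketch of the IoU computation and of the threshold range the metric averages over. This is illustrative only, not the benchmark's actual evaluation code (which uses pycocotools' COCOeval and evaluates segmentation masks, not just boxes); the box format and values are assumptions for the example.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned [x, y, w, h] boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Overlap along each axis (clamped at zero when the boxes are disjoint).
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

# "IoU=0.50:0.05:0.95" means AP is computed at each of these ten
# thresholds and the ten values are averaged into the final score.
thresholds = [0.50 + 0.05 * i for i in range(10)]

# Example: two 10x10 boxes overlapping by half their width.
print(iou([0, 0, 10, 10], [5, 0, 10, 10]))  # intersection 50, union 150
```

A prediction counts as a true positive at threshold t only if its IoU with a matched ground-truth instance is at least t, so higher thresholds demand much tighter localization.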
The AIcrowd team talked to previous winners of the benchmark about their experience participating, their approaches, and their tips for fellow participants. There are many interesting snippets in their stories that you may want to check out!
- Gaurav Singhal on Winning Round 4 of the previous edition of Food Recognition Challenge
- Rohit and Shraddha on their journey from Chennai, India, to Switzerland
- Read Mark Potanin's advice on how he improves his scores!
🙋 Frequently Asked Questions
Who can participate in this benchmark?
- Anyone. This benchmark is open to everyone.
Do I need to take part in all the rounds?
- No. Each round has separate prizes. You can participate in any one of the rounds or all of them.
I am a beginner. Where do I start?
- The official starter kit and the data exploration & baseline notebook linked below are good places to start.
What is the maximum team size?
- Each team can have a maximum of 5 members.
Do I have to pay to participate?
- No. Participation is free and open to all.
Is there a private test set?
- Yes. The test set given in the Resources section is only for local evaluation. You are required to submit a repository that is run against a private test set. Please read the starter kit for more information.
How do I upload my model to GitLab?
- To upload your models, please use Git Large File Storage.
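A minimal sketch of the usual Git LFS workflow (the `*.pth` pattern and the checkpoint path below are assumptions; substitute your own model files and branch name):

```shell
# One-time setup of the Git LFS hooks for this repository.
git lfs install

# Track checkpoint files so Git stores lightweight pointers instead of blobs.
git lfs track "*.pth"

# Commit the tracking config together with the weights, then push.
git add .gitattributes models/model_final.pth
git commit -m "Add model weights via Git LFS"
git push origin master
```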
Where do I ask questions not covered here?
- Head over to the Discussions Forum and feel free to ask!
🍕 Food Recognition Benchmark: Data Exploration & Baseline