Oriane Siméoni

Research Scientist at Meta FAIR
Located in Paris, France
download resume

Short Bio

Oriane Siméoni is a Research Scientist in the DINO team at Meta FAIR, specializing in computer vision with a focus on methods that require no human annotation. Previously, she spent over three years in the valeo.ai research team, where she already concentrated on vision solutions using little to no annotation. She did her PhD at Inria Rennes - Bretagne Atlantique under the supervision of Yannis Avrithis and Guillaume Gravier. During her PhD, she completed two internships, at Twitter and Google, and visited the Visual Recognition Group (CVUT).

News

  • 05/24 I am serving as an Area Chair for the first time @ECCV'24.
  • 02/24 How to distill high-quality 3D representations from 2D self-supervised backbones? Gilles Puy will present ScaLR at CVPR'24.
    • Three Pillars improving Vision Foundation Model Distillation for Lidar,
      G. Puy, S. Gidaris, A. Boulch, O. Siméoni, C. Sautier, P. Pérez, A. Bursuc, R. Marlet
      CVPR 2024 paper code
  • 12/23 Our new work on efficient CLIP densification is out, check out CLIP-DINOiser!
  • 12/23 Antonin Vobecký is presenting POP-3D @NeurIPS'23.
    • POP-3D: Open-Vocabulary 3D Occupancy Prediction from Images,
      A. Vobecký, O. Siméoni, D. Hurych, S. Gidaris, A. Bursuc, P. Pérez and J. Sivic
      NeurIPS 2023 paper page code
  • 12/23 I had the honor of giving a talk at the 46th PRCV Colloquium about object localization.
  • 10/23 CLIP-DIY is accepted at WACV'24. Congrats Monika Wysoczańska!
    • CLIP-DIY: CLIP Dense Inference Yields Open-Vocabulary Semantic Segmentation For-Free,
      M. Wysoczańska, M. Ramamonjisoa, T. Trzciński and O. Siméoni.
      WACV 2024 paper code
  • 10/23 Our survey on Unsupervised Object Localization with SSL ViTs is out! paper awesome page
  • 10/23 I am Logistics Chair @ICCV'23, see you in Paris!
  • 09/23 Keynote talk at the ECMLPKDD Workshop "Adapting to Change".
  • 07/23 Our paper SeedAL is accepted at ICCV'23.
    • You Never Get a Second Chance To Make a Good First Impression: Seeding Active Learning for 3D Semantic Segmentation,
      N. Samet, O. Siméoni, G. Puy, G. Ponimatkin, R. Marlet and V. Lepetit.
      ICCV 2023 paper code
  • 06/23 Our tutorial @CVPR'23 about object localization for free is now online.
  • 03/23 The code and demo of our paper FOUND are out.
  • 02/23 Our tutorial about unsupervised object localization is accepted at CVPR'23, check it out!
  • 02/23 FOUND (running at 80 FPS) for unsupervised object localization is accepted at CVPR'23.
    • Unsupervised Object Localization: Observing the Background to Discover Objects,
      O. Siméoni, C. Sekkat, G. Puy, A. Vobecky, E. Zablocki and P. Pérez.
      CVPR 2023 page paper code demo
  • 11/22 I had the honor of being an invited jury member at the excellent PhD defense of Huy V. Vo. Congrats Huy!
  • 06/22 Two papers (incl. one oral) accepted to ECCV'22.
    • Drive&Segment: Unsupervised Semantic Segmentation of Urban Scenes via Cross-modal Distillation,
      A. Vobecky, D. Hurych, O. Siméoni, S. Gidaris, A. Bursuc, P. Pérez and J. Sivic.
      ECCV 2022 (oral) page paper code
    • Active Learning Strategies for Weakly-Supervised Object Detection,
      H. V. Vo, O. Siméoni, S. Gidaris, A. Bursuc, P. Pérez and J. Ponce.
      ECCV 2022 page paper code
  • 04/22 I am an Emergency Reviewer at ECCV'22.
  • 03/22 I am an Outstanding Reviewer at CVPR'22.
  • 11/21 The code of our paper LOST is out here.
  • 09/21 Our paper LOST is accepted to BMVC'21.
    • Localizing Objects with Self-Supervised Transformers and no Labels,
      Oriane Siméoni, Gilles Puy, Huy V Vo, Simon Roburin, Spyros Gidaris, Andrei Bursuc, Patrick Pérez, Renaud Marlet, Jean Ponce.
      BMVC 2021 paper code

Activities


  • [09/23] I am a Logistics Chair at ICCV'23, Paris, France.
  • [06/23] Organization of the CVPR'23 Tutorial object localization for free, Vancouver, Canada.
  • [01/23] Organization of valeo.ai internal workshop VAST, Paris, France.
  • Outstanding reviewer award CVPR'22
  • Active conference reviewer: CVPR [2020-2024], ECCV [2022], ICCV [2023], NeurIPSW [2021]
  • Active journal reviewer: TPAMI [2021-2023], IJCV [2022-2024], CVIU [2022-2023], IEEE RA [2020]
  • Conference Area Chair: ECCV [2024]
  • PhD jury member: Huy V. Vo (invited member, 2022), Romain Loiseau (examiner, 2023)

Talks


The materials shared here are licensed under the Creative Commons BY-NC-SA 4.0 International License; they may only be used for non-commercial purposes.

Research


Three Pillars improving Vision Foundation Model Distillation for Lidar

Gilles Puy, Spyros Gidaris, Alexandre Boulch, Oriane Siméoni, Corentin Sautier, Patrick Pérez, Andrei Bursuc, Renaud Marlet
CVPR, 2024
paper code BibTeX
@inproceedings{puy24scalr,
 title={Three Pillars improving Vision Foundation Model Distillation for Lidar},
 author={Puy, Gilles and Gidaris, Spyros and Boulch, Alexandre and Sim\'eoni, Oriane and Sautier, Corentin and P\'erez, Patrick and Bursuc, Andrei and Marlet, Renaud},
 booktitle={CVPR},
 year={2024} }

CLIP-DINOiser: Teaching CLIP a few DINO tricks

Monika Wysoczańska, Oriane Siméoni, Michaël Ramamonjisoa, Andrei Bursuc, Tomasz Trzciński and Patrick Pérez
ECCV, 2024
paper page code BibTeX
@article{wysoczanska2023clipdino,
 title={CLIP-DINOiser: Teaching CLIP a few DINO tricks},
 author={Wysocza{\'n}ska, Monika and Sim{\'e}oni, Oriane and Ramamonjisoa, Micha{\"e}l and Bursuc, Andrei and Trzci{\'n}ski, Tomasz and P{\'e}rez, Patrick},
 journal={arXiv preprint arXiv:2312.12359},
 year={2023}
}

Unsupervised Object Localization in the Era of Self-Supervised ViTs: A Survey

Oriane Siméoni, Eloi Zablocki, Spyros Gidaris, Gilles Puy and Patrick Pérez
IJCV, 2024
paper Awesome page BibTeX
@article{simeoni2023unsupervised,
 title={Unsupervised Object Localization in the Era of Self-Supervised ViTs: A Survey},
 author={Sim{\'e}oni, Oriane and Zablocki, {\'E}loi and Gidaris, Spyros and Puy, Gilles and P{\'e}rez, Patrick},
 journal={arXiv preprint arXiv:2310.12904},
 year={2023}
}

CLIP-DIY: CLIP Dense Inference Yields Open-Vocabulary Semantic Segmentation For-Free

Monika Wysoczańska, Michaël Ramamonjisoa, Tomasz Trzciński, Oriane Siméoni
WACV, 2024
paper code BibTeX
@inproceedings{wysoczanska2023clipdiy,
 title={CLIP-DIY: CLIP Dense Inference Yields Open-Vocabulary Semantic Segmentation For-Free},
 author={Wysocza{\'n}ska, Monika and Ramamonjisoa, Micha{\"e}l and Trzci{\'n}ski, Tomasz and Sim{\'e}oni, Oriane},
 booktitle={IEEE Winter Conference on Applications of Computer Vision (WACV)},
 year={2024}
}

POP-3D: Open-Vocabulary 3D Occupancy Prediction from Images

Antonín Vobecký, Oriane Siméoni, David Hurych, Spyros Gidaris, Andrei Bursuc, Patrick Pérez, Josef Sivic
NeurIPS, 2023
paper page code BibTeX
@inproceedings{vobecky2023pop3d,
 title={POP-3D: Open-Vocabulary 3D Occupancy Prediction from Images},
 author={Vobeck{\'y}, Anton{\'\i}n and Sim{\'e}oni, Oriane and Hurych, David and Gidaris, Spyros and Bursuc, Andrei and P{\'e}rez, Patrick and Sivic, Josef},
 booktitle={Advances in Neural Information Processing Systems (NeurIPS)},
 year={2023}
}

MOCA: Self-supervised Representation Learning by Predicting Masked Online Codebook Assignments

Spyros Gidaris, Andrei Bursuc, Oriane Siméoni, Antonín Vobecký, Nikos Komodakis, Matthieu Cord, Patrick Pérez
arXiv, 2023
paper BibTeX
@article{gidaris2023moca,
 title={MOCA: Self-supervised Representation Learning by Predicting Masked Online Codebook Assignments},
 author={Gidaris, Spyros and Bursuc, Andrei and Simeoni, Oriane and Vobecky, Antonin and Komodakis, Nikos and Cord, Matthieu and P{\'e}rez, Patrick},
 journal={arXiv preprint arXiv:2307.09361},
 year={2023}
}

You Never Get a Second Chance To Make a Good First Impression: Seeding Active Learning for 3D Semantic Segmentation

Nermin Samet, Oriane Siméoni, Gilles Puy, Georgy Ponimatkin, Renaud Marlet and Vincent Lepetit
ICCV, 2023
paper code BibTeX
@inproceedings{samet2023seedal,
 author = {Samet, Nermin and Sim{\'e}oni, Oriane and Puy, Gilles and Ponimatkin, Georgy and Marlet, Renaud and Lepetit, Vincent},
 title = {You Never Get a Second Chance To Make a Good First Impression: Seeding Active Learning for 3D Semantic Segmentation},
 booktitle = {IEEE International Conference on Computer Vision (ICCV)},
 year = {2023},
}

Unsupervised Object Localization: Observing the Background to Discover Objects

Oriane Siméoni, Chloé Sekkat, Gilles Puy, Antonin Vobecky, Eloi Zablocki and Patrick Pérez
CVPR, 2023
page paper code demo BibTeX
@inproceedings{simeoni2023found,
 author = {Sim{\'e}oni, Oriane and Sekkat, Chlo{\'e} and Puy, Gilles and Vobecky, Antonin and Zablocki, {\'E}loi and P{\'e}rez, Patrick},
 title = {Unsupervised Object Localization: Observing the Background to Discover Objects},
 booktitle = {IEEE Conference on Computer Vision and Pattern Recognition, {CVPR}},
 year = {2023},
}

Active Learning Strategies for Weakly-Supervised Object Detection

Huy V. Vo, Oriane Siméoni, Spyros Gidaris, Andrei Bursuc, Patrick Pérez and Jean Ponce
ECCV, 2022
page paper code BibTeX
@inproceedings{vo2022bib,
 title = {Active Learning Strategies for Weakly-Supervised Object Detection},
 author = {Vo, Huy V. and Sim{\'e}oni, Oriane and Gidaris, Spyros and Bursuc, Andrei and P{\'e}rez, Patrick and Ponce, Jean},
 booktitle = {Proceedings of the European Conference on Computer Vision {(ECCV)}},
 month = {October},
 year = {2022}
}

Drive&Segment: Unsupervised Semantic Segmentation of Urban Scenes via Cross-modal Distillation

Antonin Vobecky, David Hurych, Oriane Siméoni, Spyros Gidaris, Andrei Bursuc, Patrick Pérez and Josef Sivic
ECCV, 2022 (oral presentation)
page paper code BibTeX
@inproceedings{vobecky2022drivesegment,
 title={Drive\&Segment: Unsupervised Semantic Segmentation of Urban Scenes via Cross-modal Distillation},
 author={Antonin Vobecky and David Hurych and Oriane Sim{\'e}oni and Spyros Gidaris and Andrei Bursuc and Patrick P{\'e}rez and Josef Sivic},
 booktitle = {Proceedings of the European Conference on Computer Vision {(ECCV)}},
 month = {October},
 year = {2022}
}

Localizing Objects with Self-Supervised Transformers and no Labels

Oriane Siméoni, Gilles Puy, Huy V Vo, Simon Roburin, Spyros Gidaris, Andrei Bursuc, Patrick Pérez, Renaud Marlet and Jean Ponce
BMVC, 2021
paper code BibTeX
@inproceedings{simeoni2021LOST,
 title = {Localizing Objects with Self-Supervised Transformers and no Labels},
 author = {Oriane Sim\'eoni and Gilles Puy and Huy V. Vo and Simon Roburin and Spyros Gidaris and Andrei Bursuc and Patrick P\'erez and Renaud Marlet and Jean Ponce},
 booktitle = {Proceedings of the British Machine Vision Conference (BMVC)},
 month = {November},
 year = {2021}
}

Robust image representation for classification, retrieval and object discovery

Oriane Siméoni
PhD thesis
thesis BibTeX
@phdthesis{simeoni2020robust,
 title={Robust image representation for classification, retrieval and object discovery},
 author={Sim{\'e}oni, Oriane},
 year={2020},
 school={Rennes 1}
}

Rethinking deep active learning: Using unlabeled data at model training

Oriane Siméoni, Mateusz Budnik, Yannis Avrithis and Guillaume Gravier
ICPR, 2020
paper code BibTeX
@inproceedings{simeoni2020rethinking,
 title = {Rethinking deep active learning: Using unlabeled data at model training},
 author = {Oriane Sim\'eoni and Mateusz Budnik and Yannis Avrithis and Guillaume Gravier},
 booktitle = {Proceedings of International Conference on Pattern Recognition (ICPR)},
 month = {December},
 address = {Virtual},
 year = {2020}
}

Local Features and Visual Words Emerge in Activations

Oriane Siméoni, Yannis Avrithis and Ondrej Chum
CVPR, 2019
paper code BibTeX
@inproceedings{simeoni2019SAC,
 author = {Sim{\'e}oni, Oriane and Avrithis, Yannis and Chum, Ondrej},
 title = {Local Features and Visual Words Emerge in Activations},
 booktitle = {CVPR},
 year = {2019}
}

Graph-based particular object discovery

Oriane Siméoni, Ahmet Iscen, Giorgos Tolias, Yannis Avrithis and Ondrej Chum
MVA, 2019
paper BibTeX
@article{simeoni2019graph,
 title={Graph-based particular object discovery},
 author={Sim{\'e}oni, Oriane and Iscen, Ahmet and Tolias, Giorgos and Avrithis, Yannis and Chum, Ond{\v{r}}ej},
 journal={Machine Vision and Applications},
 volume={30},
 number={2},
 pages={243--254},
 year={2019},
 publisher={Springer}
}

Unsupervised object discovery for instance recognition

Oriane Siméoni, Ahmet Iscen, Giorgos Tolias, Yannis Avrithis and Ondrej Chum
WACV, 2018
paper video BibTeX
@inproceedings{simeoni2018uod,
 title={Unsupervised object discovery for instance recognition},
 author={Sim{\'e}oni, Oriane and Iscen, Ahmet and Tolias, Giorgos and Avrithis, Yannis and Chum, Ondrej},
 booktitle={IEEE Winter Conference on Applications of Computer Vision (WACV)},
 year={2018}
}

Tracking global gene expression responses in T cell differentiation

Oriane Siméoni, Vincent Piras, M. Tomita and Kumar Selvarajoo
Gene, 2015
paper BibTeX
@article{simeoni2015tracking,
 title={Tracking global gene expression responses in T cell differentiation},
 author={Simeoni, Oriane and Piras, Vincent and Tomita, Masaru and Selvarajoo, Kumar},
 journal={Gene},
 volume={569},
 number={2},
 pages={259--266},
 year={2015},
 publisher={Elsevier}
}

Longer Presentation


I am currently a Research Scientist in the DINO team at Meta FAIR, working on computer vision. Previously, I spent over three years in the valeo.ai research team, focusing on perception. I study methods that compute robust visual representations while using less supervision and exploiting more of the available data. In particular, I have worked on classification, detection and retrieval tasks. I am interested in image representation, multimodal learning and machine learning in general.

I defended my PhD on the 10th of September 2020 (delayed due to COVID). I had the honor of having my thesis reviewed by Prof. Patrick Pérez and Prof. Josef Sivic, and examined by Prof. Cordelia Schmid, Dr. Hervé Jégou, Dr. Diane Larlus, Dr. Elisa Fromont and Dr. Ondřej Chum (as an invited member). I did my PhD at Inria Rennes - Bretagne Atlantique under the supervision of Yannis Avrithis and Guillaume Gravier. During my PhD, I completed two internships, at Twitter and Google, and visited the Visual Recognition Group (CVUT).

Last updated: 29th of May 2024

This website is my own production, styled using Bootstrap, Font Awesome and Google Fonts.