The wide availability of deepfake generation tools makes creating and disseminating deepfakes easy, and these models now produce results that can deceive humans. The CI2(IA) project aims to verify the trustworthiness of images.
Objectives:
- Domain Adaptation for Digital Image Forensics
  - Real, annotated data is rarely available, which makes model generalization very hard.
- Ethical Deepfakes
  - Embedding a watermark during deepfake generation makes it easy to authenticate a deepfake as such.
- Semantic Analysis of Forged Images
  - Detect image manipulation and understand the semantics of the change.
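The watermarking objective can be illustrated with a toy sketch: embedding a binary payload into the least-significant bits of a generated image so it can later be authenticated. This simple LSB scheme is an illustrative stand-in chosen here for brevity, not the project's actual method; the image and payload are made-up data.

```python
import numpy as np

def embed_watermark(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Write the watermark bits into the least-significant bits of the
    first bits.size pixels (illustrative LSB scheme, not CI2(IA)'s method)."""
    flat = image.flatten()  # flatten() returns a copy, original untouched
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_watermark(image: np.ndarray, n_bits: int) -> np.ndarray:
    """Recover the first n_bits least-significant bits."""
    return image.flatten()[:n_bits] & 1

# Hypothetical "generated" image and watermark payload
rng = np.random.default_rng(0)
fake = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
mark = rng.integers(0, 2, size=16, dtype=np.uint8)

stamped = embed_watermark(fake, mark)
recovered = extract_watermark(stamped, mark.size)
assert np.array_equal(recovered, mark)  # the image authenticates as marked
```

Because only the lowest bit of each affected pixel changes, the watermark is visually imperceptible while remaining trivially extractable by anyone who knows the scheme.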
Contact us:
You can contact the project members at the following address: vincent.itier at imt-nord-europe fr
© 2024 CI2(IA)