Canon Research Centre France S.A.S.

Internship proposals

Internships are proposed within the scope of CRF's corporate social responsibility (CSR). Their sole aim is to contribute to the education of interns, who will benefit from the expertise of CRF researchers. CRF proposes several internships each year, so do not hesitate to visit this page regularly.


5G AIML

Internship: 5 to 6 months / Preferred start date: February 2024

Internship subject

3GPP is actively developing new specifications that include artificial intelligence techniques to enhance 5G performance, particularly at the physical layer. The enhancements under study cover channel state information (CSI) prediction, accurate user equipment (UE) positioning and radio beam management.

Mission

In this internship, we intend to develop a machine learning model addressing goals such as CSI compression or CSI prediction.
The first objective of the internship is to set up a simulation environment (e.g. in Matlab) modeling a link-level simulation that shows the performance of the legacy CSI module in 3GPP 5G. The second objective is to implement a CNN model (e.g. based on ResNet) performing AI/ML-based CSI compression, following the approach of one of the 3GPP study reports. The results will be analyzed against both the simulated legacy performance and the AI/ML performance reported by 3GPP.
Finally, the last objective depends on the findings of the previous tasks. One option is to test the use of transformers instead of a CNN model; another is to investigate a system-level simulation in Matlab targeting the CSI module.
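For illustration, the sketch below shows the kind of CNN autoencoder with ResNet-style refinement blocks evaluated for AI/ML-based CSI compression. It is written in PyTorch rather than Matlab, and the CSI tensor shape, codeword size and layer sizes are assumptions made for the example, not values taken from the 3GPP study report.

```python
# Minimal PyTorch sketch of a CsiNet-style autoencoder for CSI compression.
# Assumed CSI tensor: 2 x 32 x 32 (real/imag parts over antenna ports and delay taps).
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with a skip connection (ResNet-style refinement)."""
    def __init__(self, channels: int = 2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 8, kernel_size=3, padding=1),
            nn.BatchNorm2d(8),
            nn.LeakyReLU(0.3),
            nn.Conv2d(8, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))

class CsiAutoencoder(nn.Module):
    """UE-side encoder compresses the CSI into a small codeword;
    the network-side decoder reconstructs it."""
    def __init__(self, csi_shape=(2, 32, 32), codeword_dim: int = 64):
        super().__init__()
        c, h, w = csi_shape
        self.csi_shape = csi_shape
        self.encoder = nn.Sequential(
            nn.Conv2d(c, c, kernel_size=3, padding=1),
            nn.LeakyReLU(0.3),
            nn.Flatten(),
            nn.Linear(c * h * w, codeword_dim),
        )
        self.decoder_fc = nn.Linear(codeword_dim, c * h * w)
        self.refine = nn.Sequential(ResidualBlock(c), ResidualBlock(c))

    def forward(self, csi):
        codeword = self.encoder(csi)                     # feedback sent over the air
        x = self.decoder_fc(codeword).view(-1, *self.csi_shape)
        return self.refine(x)

if __name__ == "__main__":
    model = CsiAutoencoder()
    csi_batch = torch.randn(16, 2, 32, 32)               # placeholder CSI samples
    recon = model(csi_batch)
    loss = nn.functional.mse_loss(recon, csi_batch)      # NMSE-style training target
    print(recon.shape, loss.item())
```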

Academic background

You are studying for a Master 2 diploma or an engineering degree in telecommunications. You are curious, open-minded and passionate about new technologies, and you have strong interpersonal skills that will help you integrate into an innovative and multicultural environment.


Document-based retrieval-augmented generation (RAG)

Internship: 5 to 6 months / Preferred start date: February 2024

Internship subject

ChatGPT is having a profound impact on the way we search for information. With ChatGPT, it is no longer necessary to go through a large number of documents to find the answer to a given question. It is enough to ask a few questions iteratively. Of course, the answer is not always perfect and can sometimes be incorrect.
Another issue is that ChatGPT (and its equivalents) have been trained on publicly available documents. ChatGPT thus cannot take into account locally generated information, which considerably limits its usefulness in a professional environment.
For example, an insurance company might wish to put a system in place that would allow its clients to check whether a specific risk is covered by their individual insurance policy. But an insurance policy, besides being confidential, is specific to a given individual and is likely to evolve over time. It is not realistic to retrain ChatGPT for each new client and for each policy change.

Mission

The aim of the internship is to create a search engine, based on GPT or an equivalent Large Language Model (LLM), that can take into account internally generated documents without retraining. In a first step, the intern will identify a suitable LLM and associated toolchain. She/he will develop a pre-filtering module for the internal documents based on their embeddings; the pre-filtered information is then fed into the LLM's context window. She/he will also develop a document-embedding database that can easily be augmented with new documents. Finally, the intern will evaluate the performance of the complete system on a set of predefined queries.
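As a minimal sketch of the pre-filtering module, the example below uses the SentenceTransformers and Faiss libraries mentioned under Specific knowledge to index a few document chunks and retrieve those closest to a query; the embedding model name, the example chunks and the prompt format are illustrative assumptions, and the resulting prompt would be passed to the chosen LLM (GPT4All, llama.cpp, ...).

```python
# Embeddings-based pre-filtering sketch: index document chunks, retrieve the
# closest ones to a question, and build the context for the LLM.
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")    # assumed embedding model

# Internal documents, split into small chunks beforehand (chunking not shown).
chunks = [
    "Article 4: water damage caused by frozen pipes is covered up to 5000 EUR.",
    "Article 7: damage caused during professional activities is excluded.",
    "Article 9: the policy must be renewed every 12 months.",
]

# Build the document-embedding database; new documents can be added with index.add().
embeddings = embedder.encode(chunks, convert_to_numpy=True, normalize_embeddings=True)
index = faiss.IndexFlatIP(embeddings.shape[1])        # inner product = cosine on normalized vectors
index.add(embeddings.astype(np.float32))

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k chunks most similar to the question."""
    q = embedder.encode([question], convert_to_numpy=True, normalize_embeddings=True)
    _, ids = index.search(q.astype(np.float32), k)
    return [chunks[i] for i in ids[0]]

question = "Am I covered if a frozen pipe floods my kitchen?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
print(prompt)   # this prompt would be sent to the selected LLM
```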

Academic background

You are studying for a Master 2 diploma or an engineering degree in telecommunications. You are curious, open-minded and passionate about new technologies, and you have strong interpersonal skills that will help you integrate into an innovative and multicultural environment.

Specific knowledge

Large Language Models (GPT4All, llama.cpp, …)
Python (PrivateGPT, LangChain, SentenceTransformers, Faiss, …)


Joint Radiocommunication and Sensing in 6G

Internship: 5 to 6 months / Preferred start date: February 2024

Internship subject

In the last few years, Canon has been diversifying its activities to address new professional markets such as security cameras, industry 4.0 or the medical field. The development of 5G technology has opened up new application opportunities for Canon's high-quality video equipment, which will be further enhanced by the upcoming 6G. This internship offers the opportunity to participate in the evolution of the connected world through 6G mobile technology.
In 2019, the 3GPP (Third Generation Partnership Project) released the latest generation of mobile communications, referred to as 5G New Radio, with its Release 15 set of specifications. Subsequent sets of specifications, Release 16 and Release 17, have since been delivered by 3GPP to add enhanced functionality and increased performance supporting advanced use cases.
The 5G-Advanced standardization currently under development is the last step before 6G standardization, expected by 2028. The race towards 6G is therefore on, with many R&D challenges to overcome in order to meet expectations, such as Joint Radiocommunication and Sensing.
The purpose of the internship is to evaluate the capabilities of sensing operations in a future 6G system and their impact on 6G radiocommunication performance.
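As a toy illustration of the principle to be evaluated (not a 6G simulator: the numerology, echo model and target distance are arbitrary assumptions), the sketch below shows how the known data of an OFDM communication symbol can be removed from its echo to obtain a range profile, i.e. how the communication waveform can be reused for sensing.

```python
# OFDM-based sensing toy example: divide the received echo by the transmitted
# subcarrier symbols and take an IFFT to obtain a range profile.
import numpy as np

c = 3e8                      # speed of light (m/s)
n_sc = 1024                  # number of subcarriers
delta_f = 120e3              # subcarrier spacing (Hz), a 5G/6G-like numerology
target_range = 60.0          # one-way target distance (m), assumed

# Known QPSK data on each subcarrier (frequency domain).
bits = np.random.randint(0, 4, n_sc)
tx = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))

# The round-trip delay shifts each subcarrier's phase linearly with frequency.
tau = 2 * target_range / c
k = np.arange(n_sc)
rx = 0.5 * tx * np.exp(-2j * np.pi * k * delta_f * tau)
rx += 0.05 * (np.random.randn(n_sc) + 1j * np.random.randn(n_sc))   # receiver noise

# Element-wise division removes the data; an IFFT gives the range profile.
profile = np.abs(np.fft.ifft(rx / tx))
range_axis = np.arange(n_sc) * c / (2 * n_sc * delta_f)   # range resolution = c / (2 * bandwidth)
print(f"estimated range: {range_axis[np.argmax(profile)]:.1f} m")
```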

Mission

Therefore, your main responsibility will be to provide:

Academic background

You are studying for a Master 2 diploma or an engineering degree in the field of Electrical Engineering/Communication Systems. You are curious, open-minded and passionate about new technologies, and you have strong interpersonal skills that will help you integrate into an innovative and multicultural environment.

Specific knowledge


Generative AI models for Medical Imaging

Internship: 5 to 6 months / Preferred start date: February 2024

Internship subject

The use of Generative Artificial Intelligence (GAI) is expanding in many domains, including medical imaging. A key benefit of GAI models is their ability to create new content that is not limited to the datasets they were trained on. The main applications in medical imaging include the generation of synthetic data (data augmentation for medical image datasets), image-to-image translation (style transfer) and image enhancement. Recently, an open-source platform for the development of generative models, the so-called “MONAI Generative Models”, has been released by a consortium of researchers [1][2]. The platform implements state-of-the-art model architectures such as diffusion models, autoregressive transformers and Generative Adversarial Networks (GANs). The capability of the platform has been demonstrated for some of the above applications and for several imaging modalities, including Computed Tomography (CT), Magnetic Resonance Imaging (MRI) and X-ray. However, none of the projects addresses Ultrasound Imaging (USI) data, and more controllable image generation, in particular through additional text-based inputs, remains to be developed.
The internship subject is to conduct experiments on USI data using the MONAI Generative Models package and possibly to study how text-based embeddings could be included in the image generation process.

Mission

In a first phase (review and experimentation), the objective of the internship will be to study in detail the capabilities of the MONAI Generative Models package, from both a scientific perspective (papers from [1]) and a software perspective (installing and running some of the projects/experiments of the package [2] on a GPU cluster). More specifically, the goal will be to design new experiments for 2D USI data generation and image-to-image translation, comparing a selected set of models implemented on the platform, such as the Latent Diffusion Model and ControlNet.
In a second phase (more research-oriented), the goal will be to study text-conditioned image generation that gives users control over the generated image content (as described in the papers from [3]), linking text-to-image synthesis to the (ultrasound) image-to-image translation/enhancement domain in a text-guided image translation task, where an input image guides the layout and the text guides the semantics and appearance of the generated ultrasound image. The objective will be to propose a first framework for the future evaluation of a text-guided ultrasound image generative model.
At the end of the training period, the intern should have acquired a good knowledge of generative AI and its application to medical imaging, as well as practical experience with a mainstream Python deep learning framework (PyTorch).
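As background for the first phase, the sketch below illustrates in plain PyTorch the DDPM training objective underlying the diffusion models implemented in MONAI Generative Models; the package itself provides ready-made UNets, schedulers and inferers, whose exact API is not reproduced here, and the tiny CNN and random tensors below are placeholders for the denoising UNet and the 2D ultrasound images.

```python
# Plain-PyTorch sketch of the epsilon-prediction DDPM training step.
import torch
import torch.nn as nn

T = 1000                                            # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)               # linear noise schedule
alpha_bars = torch.cumprod(1.0 - betas, dim=0)      # cumulative signal retention

denoiser = nn.Sequential(                           # placeholder for a UNet epsilon-predictor
    nn.Conv2d(1, 32, 3, padding=1), nn.SiLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.SiLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-4)

def training_step(x0: torch.Tensor) -> float:
    """One DDPM step: noise clean images x0 at a random timestep and train the
    network to predict the added noise (timestep conditioning omitted here)."""
    t = torch.randint(0, T, (x0.shape[0],))
    eps = torch.randn_like(x0)
    a_bar = alpha_bars[t].view(-1, 1, 1, 1)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps   # forward (noising) process
    loss = nn.functional.mse_loss(denoiser(x_t), eps)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

fake_ultrasound_batch = torch.randn(8, 1, 64, 64)   # placeholder for 2D USI data
print(training_step(fake_ultrasound_batch))
```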

Academic background

You are studying for a Master 2 diploma or an engineering degree in telecommunications. You are curious, open-minded and passionate about new technologies, and you have strong interpersonal skills that will help you integrate into an innovative and multicultural environment.

Specific knowledge

AI, Computer Science and/or Image Processing
Python


AI-based 3D compression

Internship: 5 to 6 months / Preferred start date: February 2024

Internship subject

MPEG (Moving Picture Experts Group) is currently developing a new compression standard (VDMC: Video-based Dynamic Mesh Coding) offering high compression performance for dynamic 3D meshes. This progressive compression approach is based on “conventional” (non-AI) compression technologies. In parallel, an MPEG call for evidence has demonstrated the efficiency of AI (using deep learning generative architectures) for 3D point cloud compression.

The internship subject is the study (review and first implementations) of AI algorithmic tools for 3D compression of either mesh or point cloud data. In particular, two possible studies can be envisioned:

Mission

Depending on the study to be addressed (to be discussed with the intern), the objectives are the following:

As for Deep Octree Coding:

As for generative approaches/architectures used for point-cloud-to-3D-mesh compression:

At the end of the training period, the intern should have acquired good knowledge as well as practical experience in the field of AI for data compression.

[1] Neural Progressive Meshes, https://arxiv.org/pdf/2308.05741.pdf (to be published in SIGGRAPH’23)
[2] Point Cloud Geometry Compression using Sparse Tensor-based Multiscale Representation, ISO/IEC JTC 1/SC 29/WG 7 m59035, January 2022
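To make the Deep Octree Coding option more concrete, the sketch below builds the octree occupancy symbols of a point cloud: at each level, every occupied node is described by an 8-bit symbol marking which of its child octants contain points, and a learned model (not shown) would predict these symbols to drive an entropy coder. The random point cloud and tree depth are illustrative assumptions, not data from the MPEG activities.

```python
# Build per-level octree occupancy symbols (8 bits per occupied node) from a point cloud.
import numpy as np

def octree_occupancy(points: np.ndarray, depth: int):
    """Return, for each level, a dict mapping node coordinates -> 8-bit occupancy symbol."""
    # Normalize the points into the unit cube, then quantize to a 2^depth voxel grid.
    span = (points.max(0) - points.min(0)).max() + 1e-9
    pts = (points - points.min(0)) / span
    voxels = np.floor(pts * (2 ** depth)).astype(np.int64).clip(0, 2 ** depth - 1)
    voxels = np.unique(voxels, axis=0)

    levels = []
    for level in range(depth):
        shift = depth - 1 - level
        parents = voxels >> (shift + 1)              # coordinates of the parent node at this level
        child_bits = (voxels >> shift) & 1           # which octant inside the parent
        child_idx = child_bits[:, 0] * 4 + child_bits[:, 1] * 2 + child_bits[:, 2]
        symbols = {}
        for p, c in zip(map(tuple, parents), child_idx):
            symbols[p] = symbols.get(p, 0) | (1 << int(c))
        levels.append(symbols)
    return levels

cloud = np.random.rand(5000, 3).astype(np.float32)   # placeholder point cloud
for d, symbols in enumerate(octree_occupancy(cloud, depth=5)):
    print(f"level {d}: {len(symbols)} occupied nodes")
```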

Academic background

You are studying for a Master 2 diploma or an engineering degree in telecommunications. You are curious, open-minded and passionate about new technologies, and you have strong interpersonal skills that will help you integrate into an innovative and multicultural environment.

Specific knowledge

AI, Computer Science, Data Compression
Python, C++


AI inference model conversion for camera embedded application

Internship: 5 to 6 months / Preferred start date: February 2024

Internship subject

Over recent years, the embedded processing capabilities of network cameras designed by Canon (Axis brand) have been constantly increasing, with the integration of hardware acceleration, including Deep Learning Processing Units (DLPU) such as the Google Edge TPU, opening the door to new analytics leveraging Artificial Intelligence inference embedded in the camera.
In parallel, the camera firmware now offers end users the possibility to customize their cameras by developing and deploying their own applications. For Axis cameras, this is done through the Axis Camera Application Platform (ACAP) Software Development Kit (SDK).
An ACAP embedded application can monitor and control several features of the camera, but can also perform deep-learning-assisted image processing with good performance, leveraging the device's deep learning hardware acceleration. Some mainstream image processing models are available for direct use on the camera, and the ACAP SDK documentation also describes the process for converting an AI model developed in TensorFlow so that it can be executed on an Edge TPU.
Such embedded image processing is applied, for instance, in the domains of Intelligent Transport Systems (ITS) or environmental monitoring (early forest fire detection).

Mission

A first objective of the internship will be to test, and become familiar with, the Axis ACAP SDK and its examples for converting an AI inference model for embedded inference on the camera.
A second step will consist in adapting this process to several other pretrained AI inference models, and evaluating the characteristics (model architecture, input size, …) required for a pretrained model to be successfully adapted for camera-embedded inference. A short study of the different embedded inference acceleration hardware (Edge TPU vs. DLPU, for instance) might also be included.
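For reference, the sketch below shows the generic TensorFlow to TensorFlow Lite conversion with full integer quantization that precedes compilation for the Edge TPU (the ACAP SDK then documents its own packaging flow on top of this); the saved-model path and the representative dataset are placeholders.

```python
# Convert a TensorFlow SavedModel to a fully int8-quantized TFLite model,
# as required before running the Edge TPU compiler.
import numpy as np
import tensorflow as tf

def representative_dataset():
    """Yield a few sample inputs so the converter can calibrate int8 ranges."""
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]  # placeholder images

converter = tf.lite.TFLiteConverter.from_saved_model("my_model/")  # hypothetical path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8    # Edge TPU expects fully integer models
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)

# The quantized model is then compiled for the Edge TPU, e.g.:
#   edgetpu_compiler model_int8.tflite
```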

Academic background

You are studying for a Master 2 diploma or an engineering degree in telecommunications. You are curious, open-minded and passionate about new technologies, and you have strong interpersonal skills that will help you integrate into an innovative and multicultural environment.

Specific knowledge

Tools: TensorFlow, TensorFlow lite, Axis ACAP SDK, Docker
Programming languages: Python, C/C++
Environment: Linux, Git
