Kolmogorov AI Framework | Part 1

Shannon Entropy Concept In 1948, the engineer and mathematician Claude Shannon published a paper foundational to computer science, and later to artificial intelligence: A Mathematical Theory of Communication. This paper introduces an idea that is central to the training of today's algorithms: information entropy. $$ H = -\sum_{i = 1}^{n} p_i \log_2(p_i) $$ This formula lets us quantify how random or organized a data source, such as a text-generating program, is. The higher the entropy, the more random the source; the lower the entropy, the more the data consists of recognizable patterns that allow us to predict the next words. ...
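As a quick illustration of the formula above, here is a minimal Python sketch (my own, not taken from the article) that estimates H from the character frequencies of a string; the helper name `shannon_entropy` is hypothetical.

```python
import math
from collections import Counter

def shannon_entropy(text: str) -> float:
    """Entropy, in bits, of the character distribution of `text`."""
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A repetitive source has low entropy; a varied one has higher entropy.
print(shannon_entropy("aaaaaaaa"))  # 0.0 bits: fully predictable
print(shannon_entropy("abcdefgh"))  # 3.0 bits: 8 equally likely symbols
```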

March 13, 2025 · 23 min · 4885 words · Julien Seveno

Physics Informed Neural Networks

Introduction Neural networks require large amounts of data to converge, and those data need to represent the task the network is trying to learn. Data collection is a tedious process, especially when gathering data is difficult or expensive. In science, physics for instance, many phenomena are described by theories that we know work very well. Using this prior knowledge as a regularizer can help neural networks generalize better with less data. ...
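To make the idea concrete, here is a rough sketch of a physics-informed loss, under my own assumptions rather than the post's actual code: a small PyTorch network is fit to a handful of observations while the residual of a toy ODE, du/dt = -u, is penalized at unlabeled collocation points.

```python
import torch
import torch.nn as nn

# Hypothetical toy setup: fit u(t) while penalizing the residual of du/dt = -u.
model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

t_data = torch.rand(16, 1)                        # few labeled points
u_data = torch.exp(-t_data)                       # observations (exact solution here)
t_phys = torch.rand(128, 1, requires_grad=True)   # many unlabeled collocation points

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(1000):
    optimizer.zero_grad()
    # Data term: match the few observations.
    loss_data = ((model(t_data) - u_data) ** 2).mean()
    # Physics term: residual of du/dt + u = 0 at the collocation points.
    u = model(t_phys)
    du_dt = torch.autograd.grad(u.sum(), t_phys, create_graph=True)[0]
    loss_phys = ((du_dt + u) ** 2).mean()
    (loss_data + loss_phys).backward()
    optimizer.step()
```

The physics term acts as a regularizer: it constrains the network on points where no labels exist, which is where the reduction in data requirements comes from.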

November 4, 2024 · 8 min · 1652 words · Julien Seveno

LoRA: Low Rank Adaptation

Introduction Whenever we need to use a deep learning model for image or text generation, classification, or any other task, we have two possibilities: train the model from scratch, or use a pre-trained model and fine-tune it for the task at hand. Training a model from scratch can be challenging and requires computational resources, time, and sometimes large quantities of data. On the other hand, using a pre-trained model is easier, but it might require some adaptation to the new task. ...
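As a hint of what that adaptation looks like with LoRA, here is a minimal sketch (my own illustration, not the post's code) in which the pre-trained weights are frozen and only a low-rank update B·A is trained; the class name `LoRALinear` and the hyperparameters are assumptions.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Hypothetical minimal LoRA wrapper: y = base(x) + (alpha / r) * x A^T B^T."""
    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        self.base.requires_grad_(False)  # freeze the pre-trained layer
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base output plus the trainable low-rank correction.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(2, 768))  # only A and B receive gradients
```

Because B is initialized to zero, the adapted layer starts out behaving exactly like the frozen pre-trained one, and training only moves the small matrices A and B.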

May 24, 2024 · 6 min · 1138 words · Julien Seveno

Le problème des vendeurs de plage

Why does neither the far right nor the far left ever win the presidential election? One might expect the curve of the number of voters along the political spectrum to look like a Gaussian centered on the middle of the political landscape, with a few variations: there are more moderates than extremists. But looking at the polling for each candidate, or at the results of the first round of the presidential election, one quickly realizes that this answer is not satisfying. ...

January 31, 2024 · 10 min · 2022 words · Julien Seveno

Ternary Computing

Introduction First, let us define “computers” as any machine that can compute. At first, analog machines were used: electronic amplifiers, for example, can perform integration, differentiation, root extraction, and logarithm computation. Analog computers played a significant role in history, but they have the disadvantage that noise can perturb the computation and lead to errors. Digital computers later replaced analog ones and became mainstream. In a digital computer, any voltage within a given range is interpreted as the same discrete value, which makes digital computers much more resilient to errors. ...

January 31, 2024 · 15 min · 3081 words · Julien Seveno

Code injection

Introduction I discovered Open Source Intelligence (OSINT) while taking a close interest in computer security. When attackers prepare an attack on an information system (IS), they need as much information as possible in order to carry out their operations. Indeed, the operating systems (OS), the servers in use, and the versions of those servers and operating systems largely determine how the attack will unfold. ...

January 24, 2024 · 10 min · 2014 words · Julien Seveno