Visual Perception: Fundamental Geometry and Camera Basics

A series about Visual Perception fundamentals such as Camera Calibration and Epipolar Geometry, with their mathematical implementations from scratch

Yağmur Çiğdem Aktaş
6 min read · Oct 15, 2021

Visual perception is the human ability to interpret the surrounding environment. [1] In this series, we will learn the fundamentals of Visual Perception for robotics step by step, using classical approaches rather than any artificial-intelligence-based perception method.

You first need to be familiar with a few geometric concepts. Therefore, in this first chapter, we will go over the mathematical concepts required for classical Visual Perception methods, and then we will walk through these methods step by step:

  • Fundamental Geometry and Camera Basics (starts right below 💃)
  • Camera Calibration
    1. DLT
    2. Zhang’s Calibration
  • Epipolar Geometry
    1. Essential Matrix
    2. Fundamental Matrix
    3. Triangulation
    4. Feature Matching
  • Visual Perception GUI
    A tool implemented with Qt Creator, OpenCV, and C++, with which you can calibrate your camera and apply the different Epipolar Geometry methods

Let’s start with the first section!

Fundamental Geometry and Camera Basics

In order to express the environment around us geometrically, we assume that there is a reference coordinate system, the “world frame”.
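A point in this world frame can then be written as a 3-vector of coordinates (X, Y, Z), or, for the projective-geometry tools used later in the series, as the homogeneous 4-vector (X, Y, Z, 1). Below is a minimal sketch of this representation in C++ with OpenCV (the same stack as the Visual Perception GUI); the coordinate values are arbitrary and only for illustration:

```cpp
#include <iostream>
#include <opencv2/core.hpp>

int main() {
    // A 3D point expressed in the world coordinate frame
    // (the values here are arbitrary, chosen only for illustration).
    cv::Vec3d X_world(1.0, 2.0, 3.0);

    // Its homogeneous representation: append 1 as a fourth coordinate,
    // so (X, Y, Z) becomes (X, Y, Z, 1). Later topics such as camera
    // calibration and epipolar geometry operate on points in this form.
    cv::Vec4d X_homogeneous(X_world[0], X_world[1], X_world[2], 1.0);

    std::cout << "World point:       " << X_world << "\n";
    std::cout << "Homogeneous point: " << X_homogeneous << "\n";
    return 0;
}
```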
