ViSP community results obtained with ViSP #1053

Open
fspindle opened this issue Apr 6, 2022 · 16 comments
Labels
community Results from ViSP community

Comments

@fspindle
Contributor

fspindle commented Apr 6, 2022

This thread was created to allow all ViSP users to post videos of results obtained in research, industrial, European, or other projects.

It complements the videos that the team regularly publishes on the vispTeam YouTube channel and the Rainbow team's channel.

To contribute to this thread, you can indicate the name of your laboratory, company, or entity, add your video or a link to it, and give a short description of the video.

Feel free to contribute to this thread to promote ViSP usage.

@fspindle fspindle added the community Results from ViSP community label Apr 6, 2022
@s-trinh
Contributor

s-trinh commented Apr 6, 2022

The following video shows results from the dISAS demonstrator of the H2020 PULSAR project, for which Magellium was the project coordinator.
The dISAS demonstrator aims to demonstrate and provide simulation tools for large-scale in-space assembly operations.
This video was presented during the IROS 2020 RISAM workshop:

The ViSP library has been used for:

  • AprilTag fiducial marker detection
  • camera pose computation from 2D/3D points (Perspective-n-Point method)
  • pose refinement using the Virtual Visual Servoing (VVS) framework
  • servoing of the robotic arm for the docking between the mirror tile and the robot end-effector, using advanced control schemes such as 2 1/2 D visual servoing, continuous sequencing, secondary tasks, and joint limit avoidance (see this VS tutorial)
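As an aside for readers curious about the math behind such a servo loop: the image-based control law computes a camera velocity v = −λ L⁺ (s − s*) from point features. Below is a minimal numpy sketch of that law (an illustration only, not ViSP's API; `interaction_matrix` and `ibvs_velocity` are hypothetical names):

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0,       x / Z, x * y,       -(1.0 + x * x),  y],
        [0.0,      -1.0 / Z,  y / Z, 1.0 + y * y, -x * y,         -x],
    ])

def ibvs_velocity(points, desired, depths, lam=0.5):
    """Camera velocity v = -lam * L^+ (s - s*) for a set of 2D point features."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    e = (np.asarray(points) - np.asarray(desired)).ravel()
    return -lam * np.linalg.pinv(L) @ e
```

At convergence (s = s*) the commanded velocity is zero; away from it, the 6-vector (vx, vy, vz, ωx, ωy, ωz) drives the camera toward the desired view.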

Watch the video


Final project video:

More in-depth details and results about the dPAMT demonstrator (precise assembly of mirror tiles):

Underwater demonstrator (dLSAFFE) for large scale robotic manipulation and simulated micro-gravity environment:

@fspindle fspindle pinned this issue Apr 7, 2022
@florent-lamiraux
Contributor

florent-lamiraux commented Apr 8, 2022

The following video demonstrates an accurate pointing task by a UR 10 robot on a mock-up of an aircraft part. This work was performed in the framework of the joint lab ROB4FAM between Airbus and CNRS, in collaboration with INRIA. The demonstration includes:

https://peertube.laas.fr/videos/watch/9632ae06-2466-46cf-9d4d-6f45ee8b4d91

Reactive.and.accurate.pointing.of.holes.in.a.part.by.localization.using.vision-1080p.mp4

@florent-lamiraux
Contributor

florent-lamiraux commented Apr 8, 2022

The following video demonstrates deburring operations performed by a Tiago mobile robot on a mock-up of an aircraft pylon. This work was performed in the framework of the joint lab ROB4FAM between Airbus and CNRS. The demonstration includes:

https://peertube.laas.fr/videos/watch/6f40ea79-abcd-490e-a616-3a67bf297d93

Tiago.mobile.manipulator.performs.deburring.tasks.on.an.aircraft.part-540p.mp4

@oKermorgant
Contributor

oKermorgant commented Apr 19, 2022

The following video is associated with the paper Integrating Features Acceleration in Visual Predictive Control, published in IEEE Robotics and Automation Letters.

  • ViSP was used for image processing and overall linear algebra
  • the actual MPC computation relies on Pinocchio
2020-Fusco_RAL.mp4

@GuicarMIS

GuicarMIS commented Apr 22, 2022

The following video is associated with the paper Defocus-based Direct Visual Servoing, published in IEEE Robotics and Automation Letters in 2021.

ViSP's vpFeatureLuminance visual feature was used and extended to take the defocus variation into account in the control law (an interaction matrix involving the Laplacian of the image).

Defocus-based.Direct.Visual.Servoing.mp4
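As background, direct (photometric) visual servoing uses the raw image intensities as the feature vector, and the defocus extension additionally involves the image Laplacian. A minimal numpy sketch of those two ingredients (illustrative only, not the ViSP implementation; both function names are made up):

```python
import numpy as np

def laplacian(img):
    """Discrete 5-point Laplacian of a grayscale image (interior pixels only)."""
    L = np.zeros_like(img, dtype=float)
    L[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1]
                     + img[1:-1, :-2] + img[1:-1, 2:]
                     - 4.0 * img[1:-1, 1:-1])
    return L

def photometric_error(I, I_star):
    """Error vector of direct visual servoing: stacked intensity differences."""
    return (I.astype(float) - I_star.astype(float)).ravel()
```

The error is zero only when the current image matches the desired one pixel by pixel, which is what makes this family of methods so accurate.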

@s-trinh
Contributor

s-trinh commented Jun 15, 2022

Model-based tracking with TORO for visual navigation

The following video is a screen capture showing the model-based tracking of an aircraft panel to allow the TORO humanoid robot to reach a predefined pose.
This video was recorded during an integration session at DLR RMC for the Comanoid H2020 project.

Model-based_tracking_with_TORO_Comanoid.mp4

Model-based tracking with TORO for bracket grasping

The following video is also a screen capture, showing the model-based tracking of a bracket feeder to allow the TORO humanoid robot to grasp different brackets.
This video was recorded during an integration session at DLR RMC for the Comanoid H2020 project.

Model-based_tracking_with_TORO_bracket_grasping_Comanoid.mp4

@s-trinh
Contributor

s-trinh commented Jun 15, 2022

The following three videos show eye-in-hand visual servoing performing the assembly of an in-space primary mirror in simulation, for the PULSAR H2020 project.
The versatility of the ViSP library is demonstrated:

  • with robust and accurate fiducial marker detection and pose computation,
  • and by servoing a robotic arm (7+1 dof, revolute and prismatic joints) in an eye-in-hand configuration.
Eye-in-Hand_Visual_Servoing_PULSAR_1.mp4
Eye-in-Hand_Visual_Servoing_PULSAR_2.mp4
Eye-in-Hand_Visual_Servoing_PULSAR_3.mp4
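Servoing a redundant arm like this typically maps the commanded camera velocity to joint space through the robot Jacobian, with the spare degrees of freedom available for secondary tasks such as joint limit avoidance. A minimal numpy sketch of that mapping (illustrative only, not ViSP's API; `joint_velocity` is a hypothetical name):

```python
import numpy as np

def joint_velocity(J, v, z=None):
    """Map a task-space velocity v to joint velocities through the Jacobian J,
    projecting an optional secondary-task velocity z into the null space of J
    so it does not disturb the main visual task."""
    J_pinv = np.linalg.pinv(J)
    qdot = J_pinv @ v
    if z is not None:
        P = np.eye(J.shape[1]) - J_pinv @ J   # null-space projector
        qdot += P @ z
    return qdot
```

Because the secondary term lives in the null space of J, the realized task velocity J·q̇ stays equal to v whatever z is chosen.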

@fspindle
Contributor Author

This video shows results obtained with ViSP for joint work between the COSMER lab and the PRAO team at Ifremer.
They mainly used the matrix representation part of ViSP, and visual servoing with custom interaction matrices and features, to control a ROV.

Test.asservissement.visuel.sur.une.chainette.avec.le.bluerov.1.mp4

@TAMADAZTE

The following video shows a highly accurate automatic positioning task (accuracy of a few tens of nanometers) performed in a microrobotic workcell using a direct visual servoing method based on photometry [1]. The vision system consisted of a high-magnification optical microscope, while the robotic system was a lab-made microrobotic positioning platform. The whole pipeline was implemented within the ViSP framework.

[1]: C. Collewet, E. Marchand. Photometric visual servoing. IEEE Transactions on Robotics, 27(4):828–834, 2011.

https://youtu.be/ogRmQJqikQ8

video_icra_2011.mp4

@TAMADAZTE

TAMADAZTE commented Jun 30, 2022

The following video illustrates a 6-DoF positioning task achieved using wavelet-coefficient-based direct visual servoing. Instead of the geometric visual features used in standard vision-based approaches, this controller takes wavelet coefficients as control inputs: the multi-resolution coefficients of the wavelet transform of the image in the spatial domain. The implementation was done using ViSP, and the experimental evaluation was performed in different conditions of use (nominal conditions, 2D/3D scenes, lighting variation, and partial occlusions).

Icra2016_c.mp4
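Wavelet-coefficient features can be obtained from something as simple as one level of a 2D Haar transform. A self-contained numpy sketch (illustrative only, not the implementation used in the paper; `haar2d` is a made-up name):

```python
import numpy as np

def haar2d(img):
    """One level of the 2D Haar wavelet transform: returns the approximation
    (LL) and detail (LH, HL, HH) subbands of an image with even dimensions."""
    a = img.astype(float)
    # average/difference along rows
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # average/difference along columns
    LL = (lo[0::2, :] + lo[1::2, :]) / 2.0
    LH = (lo[0::2, :] - lo[1::2, :]) / 2.0
    HL = (hi[0::2, :] + hi[1::2, :]) / 2.0
    HH = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return LL, LH, HL, HH
```

Recursing on LL yields the multi-resolution pyramid of coefficients that such a controller can stack into its feature vector.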

@TAMADAZTE

TAMADAZTE commented Jun 30, 2022

The video below illustrates a weakly calibrated three-view visual servoing control law for laser steering in the context of surgical procedures (the final aim). It revisits the conventional trifocal constraints governing three-view geometry to make them more suitable for the design of an efficient trifocal vision-based control. An explicit control law is thereby derived, without any matrix inversion or complex matrix manipulation.

Different ViSP functions were used, e.g., the vpDot visual tracker from ViSP was used to track the laser spot.

video_ijrr.mp4
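A vpDot-style tracker essentially follows the center of gravity of a bright blob from frame to frame. A minimal numpy sketch of that idea (illustrative only, not ViSP's implementation; `track_spot` is a made-up name):

```python
import numpy as np

def track_spot(img, threshold):
    """Center of gravity of pixels brighter than `threshold`, as a blob
    tracker would compute it. Returns (row, col), or None if no pixel
    exceeds the threshold."""
    rows, cols = np.nonzero(img > threshold)
    if rows.size == 0:
        return None
    return rows.mean(), cols.mean()
```

Calling this on each new frame, seeded near the previous centroid, is the core of a simple laser-spot tracking loop.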

@TAMADAZTE

The following video shows the automatic control of a laser spot using a visual servoing approach. This work was developed in the context of minimally invasive surgery of the middle ear: burring residual pathological tissue called cholesteatoma. The approach combines an optimal path generation method based on the well-known Traveling Salesman Problem with an image-based visual servoing scheme to treat the residual cholesteatoma, which looks like debris spread all over the middle ear cavity.

ViSP was used both for the laser spot and cholesteatoma visual tracking.

ieee_iros_2022.mp4
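The "visit every residue spot once" part can be approximated with a greedy nearest-neighbor heuristic for the TSP. A minimal numpy sketch (illustrative only, not the optimal path generation method used in the work; the function name is hypothetical):

```python
import numpy as np

def nearest_neighbor_tour(points, start=0):
    """Greedy nearest-neighbor ordering of 2D points: a cheap TSP heuristic
    for visiting every target once, starting from index `start`."""
    pts = np.asarray(points, dtype=float)
    unvisited = set(range(len(pts))) - {start}
    tour = [start]
    while unvisited:
        last = pts[tour[-1]]
        nxt = min(unvisited, key=lambda i: np.linalg.norm(pts[i] - last))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour
```

The resulting ordering can then be fed, spot by spot, to the visual servoing loop that steers the laser.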

@TAMADAZTE

The video below illustrates a path-following method for laser steering in the context of vocal fold surgery. In this work, non-holonomic control of the unicycle model is used to implement velocity-independent visual path following for laser surgery. The controller was tested in simulation as well as experimentally, in several conditions of use: different initial velocities (step input, successive step inputs, sinusoidal inputs), optimized/non-optimized gains, a time-varying path (simulating patient breathing), and complex curves with high curvature. Experiments performed at 587 Hz showed an average path-following accuracy below 0.22 pixels (≈ 10 µm) with a standard deviation of 0.55 pixels (≈ 25 µm), and a relative velocity distortion of less than 10^−6 %.

t-ro_video.mp4
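The unicycle path-following idea can be sketched in a few lines: with lateral offset y and heading error θ w.r.t. the path, a heading-rate command of the form ω = −k_y·y − k_θ·θ steers both to zero while the forward velocity is left free. A minimal numpy simulation (illustrative only; the gains and control-law shape are assumptions, not the published controller):

```python
import numpy as np

def simulate_path_following(y0, theta0, v=1.0, k_y=1.0, k_theta=2.0,
                            dt=0.01, steps=2000):
    """Unicycle following the x-axis: lateral offset y and heading theta
    are driven to zero by the heading-rate command, while the forward
    velocity v stays untouched."""
    y, theta = y0, theta0
    for _ in range(steps):
        omega = -k_y * y - k_theta * theta   # steering control law
        y += v * np.sin(theta) * dt          # lateral unicycle dynamics
        theta += omega * dt
    return y, theta
```

Starting off the path, both the offset and the heading error decay to zero, which is the behavior exploited for velocity-independent laser steering.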

@TAMADAZTE

The following video shows the operation of a vision-based control law achieving automatic 6-DoF positioning tasks. The objective of this work was to be able to replace a biological sample under an optical device for non-invasive depth examination at any given time (i.e., performing repetitive and accurate optical characterizations of the sample). The optical examination, also called optical biopsy, is performed with an optical coherence tomography (OCT) system. The OCT device is used both to perform a 3-dimensional optical biopsy and as a sensor to control the robot motion during the repositioning process. The visual servoing controller uses the 3D pose of the studied biological sample, estimated directly from the C-scan OCT images using a Principal Component Analysis (PCA) framework.

pca_multimedia.mp4
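The PCA step amounts to taking the centroid and principal axes of the 3D point cloud extracted from the C-scan. A minimal numpy sketch (illustrative only; `pca_pose` is a hypothetical name, not the authors' code):

```python
import numpy as np

def pca_pose(points):
    """Principal-axis frame of a 3D point cloud: centroid plus a rotation
    whose columns are the principal directions (eigenvectors of the
    covariance, sorted by decreasing variance)."""
    P = np.asarray(points, dtype=float)
    centroid = P.mean(axis=0)
    cov = np.cov((P - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    R = eigvecs[:, ::-1]                 # sort axes by decreasing variance
    if np.linalg.det(R) < 0:             # enforce a right-handed frame
        R[:, -1] *= -1.0
    return centroid, R
```

Tracking how this frame moves between acquisitions gives the pose error that the servoing controller regulates.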

@s-trinh
Contributor

s-trinh commented Jul 15, 2022

Model-based tracking with HRP-4 for circuit breaker manipulation

The following video shows, on the left, the model-based tracking of a circuit breaker and, on the right, the model-based tracking of the HRP-4 hand. This allows servoing the robot hand and tool to reach a predefined pose w.r.t. one of the circuit breaker switches.
This video was recorded during an integration session at LIRMM for the Comanoid H2020 project.

Model-based_tracking_with_HRP-4_Comanoid.mp4
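Tracking both the hand and the breaker yields two camera-frame poses; the servoed error can be built from their relative transform. A minimal numpy sketch of that composition (illustrative only; the function names are hypothetical):

```python
import numpy as np

def inverse(T):
    """Inverse of a 4x4 homogeneous transform."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def relative_pose(c_T_breaker, c_T_hand):
    """Pose of the hand expressed in the breaker frame, from the two
    camera-frame poses produced by the trackers:
    breaker_T_hand = (c_T_breaker)^-1 * c_T_hand."""
    return inverse(c_T_breaker) @ c_T_hand
```

Driving this relative transform toward a desired one makes the servo robust to camera motion, since the camera frame cancels out of the error.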

Model-based tracking of a circuit breaker using edges+KLT+depth features

The following video shows the model-based tracking of a circuit breaker. The combination of edge, KLT, and depth features allows for stable and robust tracking and precise pose computation of the circuit breaker.
This video was recorded during an integration session at LIRMM for the Comanoid H2020 project.

Model-based_tracking_of_a_circuit_breaker_Comanoid.mp4

Model-based tracking comparison between edges+KLT and edges+depth features in order to track a printer

The following video shows the model-based tracking of a printer. It compares tracking and pose computation between edges+KLT features and edges+depth features. Using depth features, provided by an ASUS Xtion sensor, allows for more stable and precise tracking and pose computation.
This video was recorded during an integration session at LIRMM for the Comanoid H2020 project.

Model-based_tracking_of_a_printer_Comanoid.mp4

@eder1234

eder1234 commented Sep 6, 2022

4-Points Visual Servoing implemented on a UAV

The following video shows a UAV reaching a desired position from its current position by applying image-based visual servoing. This Master's project was supervised by Guillaume Caron, David Lara and Claude Pégard.

Watch the video
