Fig. 6. Canny edge detection (left); contour detection (right).


Source publication
Chapter
Full-text available
Computer vision has proven to be a remarkable field of modern computer science, and its applications are used on a regular basis. In this paper, we propose a new model for tracking objects (e.g., cars, human faces, interior objects, arms) in real time from a video feed by providing training with Haar features and also with the i...
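The abstract mentions real-time detection and tracking from a video feed using Haar features. As an illustrative sketch only, not the authors' pipeline, the snippet below shows the common OpenCV pattern of running a pre-trained Haar cascade on each frame of a capture source; the cascade file (haarcascade_frontalface_default.xml), the camera index, and the detection parameters are assumptions.

```python
import cv2

# Illustrative sketch: per-frame detection with a pre-trained Haar cascade.
# The cascade file, video source, and parameters are assumptions, not taken
# from the source paper.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)  # 0 = default camera; a video file path also works

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Multi-scale detection; scaleFactor/minNeighbors are typical defaults
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```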

Contexts in source publication

Context 1
... we have detected the Canny edges of the binary-processed detected area containing the object(s). The purpose of Canny edge detection is to detect the strong edges in each frame. Several key processing steps have been performed: noise reduction, finding the intensity gradient of the image, non-maximum suppression, and hysteresis thresholding [13]. Fig. 6 (left) shows the detection of the strong edges of the grayscale image; Fig. 6 (right) shows the detected contours. While finding the intensity gradient of the image, a Sobel kernel has been used to obtain the first derivative along the horizontal axis and along the vertical axis. The equation [13] of the edge gradient and direction is ...
Context 2
... the object(s). The purpose of Canny edge detection is to detect the strong edges in each frame. Several key processing steps have been performed: noise reduction, finding the intensity gradient of the image, non-maximum suppression, and hysteresis thresholding [13]. Fig. 6 (left) shows the detection of the strong edges of the grayscale image; Fig. 6 (right) shows the detected contours. While finding the intensity gradient of the image, a Sobel kernel has been used to obtain the first derivative along the horizontal axis and along the vertical axis. The equation [13] of the edge gradient and direction is provided in (5) and (6), ...
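Both contexts describe the standard Canny pipeline (noise reduction, Sobel gradients, non-maximum suppression, hysteresis thresholding) followed by contour extraction. The usual forms of the edge gradient and direction referenced from [13] are G = sqrt(Gx^2 + Gy^2) and theta = arctan(Gy / Gx). The sketch below is a minimal OpenCV illustration of these steps, not the authors' exact parameters; the input file name, thresholds, and kernel size are assumptions.

```python
import cv2
import numpy as np

# Minimal sketch of the described steps; the input frame, thresholds, and
# kernel size are assumptions.
gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input frame
blurred = cv2.GaussianBlur(gray, (5, 5), 0)           # noise reduction

# Sobel first derivatives along the horizontal (x) and vertical (y) axes
gx = cv2.Sobel(blurred, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(blurred, cv2.CV_64F, 0, 1, ksize=3)
magnitude = np.sqrt(gx**2 + gy**2)   # edge gradient, G = sqrt(Gx^2 + Gy^2)
direction = np.arctan2(gy, gx)       # edge direction, theta = arctan(Gy / Gx)

# Canny applies gradient computation, non-maximum suppression, and
# hysteresis thresholding internally
edges = cv2.Canny(blurred, 100, 200)

# Contours extracted from the edge map, as in Fig. 6 (right)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
output = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
cv2.drawContours(output, contours, -1, (0, 255, 0), 1)
```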

Similar publications

Preprint
Full-text available
Humans are very good at directing their visual attention toward relevant areas when they search for different types of objects. For instance, when we search for cars, we will look at the streets, not at the top of buildings. The motivation of this paper is to train a network to do the same via a multi-task learning approach. To train visual attenti...

Citations

Conference Paper
Vision-based object tracking is crucial for both civil and military applications. The rise of drones, or unmanned aerial vehicles (UAVs), poses a range of hazards to cyber safety, vital infrastructure, and public privacy. As a result, identifying and tracking suspicious drones/UAVs is a serious challenge that has attracted strong research attention recently. The focus of this research is to develop a new virtual-coloured-marker-based tracking algorithm for estimating the posture of the detected object. After detection, the algorithm begins by determining the coloured area of the detected object as a reference contour. A Virtual Bounding Box (V-BB) is then created over the reference contour by meeting a minimum-contour-area criterion. Additionally, a Virtual Dynamic Crossline with a Virtual Static Graph (VDC-VSG) was developed to track the movement of the V-BB, which is treated as a virtual coloured marker and helps estimate the pose of the detected object during observation. Moreover, the V-BB helps avoid ambient-illumination-related difficulties during the tracking process. A significant number of aerial sequences, including benchmark footage, were tested with the proposed approach, and the outputs were highly encouraging, with satisfactory results. Potential applications of the proposed method include object detection and analysis in the field of security and defence.
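As an illustrative sketch only, not the cited paper's VDC-VSG algorithm, the snippet below shows the common OpenCV pattern behind the described steps: isolating a coloured region, keeping contours that satisfy a minimum-area criterion, and drawing a bounding box over the reference contour. The HSV range, area threshold, and input file name are assumptions.

```python
import cv2
import numpy as np

# Illustrative coloured-contour + bounding-box sketch; the HSV range, area
# threshold, and input frame are assumptions, not values from the cited paper.
frame = cv2.imread("aerial_frame.png")              # hypothetical input frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
lower = np.array([0, 120, 70])                      # reddish marker (assumed)
upper = np.array([10, 255, 255])
mask = cv2.inRange(hsv, lower, upper)               # coloured area of the object

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
candidates = [c for c in contours if cv2.contourArea(c) > 500]  # min-area criterion
if candidates:
    reference = max(candidates, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(reference)        # bounding box over the contour
    cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
```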
Article
Full-text available
Variations in the quantity of plankton impact the entire marine ecosystem. Accurately assessing the dynamic evolution of plankton is of great significance for monitoring the marine environment and global climate change. In this paper, a novel method is introduced for deep-sea plankton community detection in the marine ecosystem using an underwater robotic platform. The videos were sampled at a distance of 1.5 m from the ocean floor, with a focal length of 1.5–2.5 m. The optical flow field is used to detect the plankton community. We showed that, for each moving plankton that does not overlap in space in two consecutive video frames, the time gradients of the spatial position of the plankton are opposite to each other in two consecutive optical flow fields. Further, the lateral and vertical gradients have the same value and orientation in two consecutive optical flow fields. Accordingly, moving plankton can be accurately detected against the complex dynamic background of the deep-sea environment. Experimental comparison with manual ground truth fully validated the efficacy of the proposed methodology, which outperforms six state-of-the-art approaches.
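As a hedged sketch of the general idea only, not the cited study's gradient-consistency test, the snippet below computes dense optical flow between consecutive frames with OpenCV's Farnebäck algorithm and thresholds the flow magnitude to flag candidate moving regions. The file names, Farnebäck parameters, and motion threshold are assumptions.

```python
import cv2
import numpy as np

# Sketch: dense optical flow between consecutive frames; file names, parameters,
# and the motion threshold are assumptions, not values from the cited study.
prev = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)   # hypothetical frame t
curr = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)   # hypothetical frame t+1

# Farneback dense flow: pyr_scale, levels, winsize, iterations, poly_n,
# poly_sigma, flags (typical defaults)
flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)
magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])

# Flag pixels whose displacement exceeds a small threshold as candidate movers
moving = (magnitude > 1.0).astype(np.uint8) * 255
contours, _ = cv2.findContours(moving, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
```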