BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//wp-events-plugin.com//7.2.3.1//EN
TZID:Asia/Kolkata
X-WR-TIMEZONE:Asia/Kolkata
BEGIN:VEVENT
UID:40@cds.iisc.ac.in
DTSTART;TZID=Asia/Kolkata:20240301T150000
DTEND;TZID=Asia/Kolkata:20240301T160000
DTSTAMP:20240221T085040Z
URL:https://cds.iisc.ac.in/events/m-tech-research-thesis-colloquium-cds-sc
 alable-read-alignment-algorithm-for-cyclic-pangenome-graphsscalable-read-a
 lignment-algorithm-for-cyclic-pangenome-graphs/
SUMMARY:M.Tech Research Thesis Colloquium: CDS: "Scalable Video Data Manag
 ement and Visual Querying for Autonomous Camera Networks"
DESCRIPTION:DEPARTMENT OF COMPUTATIONAL AND DATA SCIENCES\nM.Tech Research 
 Thesis Colloquium\n\n\n\nSpeaker : Ms. Bharati Khanijo\n\nS.R. Number : 06
 -18-02-10-12-19-1-17219\n\nTitle : "Scalable Video Data Management and V
 isual Querying for Autonomous Camera Networks"\nResearch Supervisor: Pro
 f. Yogesh Simmhan\nDate & Time : March 01\, 2024 (Friday) at 03:00 PM\nV
 enue : # 102 CDS Seminar Hall\n\n\n\nABSTRACT\n\nVideo data has been histori
 cally known for its unstructured nature and rich semantic content but also
  for scalability issues in terms of storage and analytics. Mobile aerial p
 latforms like drones capture such videos across space and time. Advances i
 n computer vision and deep learning enable automatic extraction of rich se
 mantic information from video data\, leading to applications where the sto
 red video data can be used to study and analyze the world retrospectively 
 and automatically. However\, recent research has highlighted the compute-i
 ntensive nature of such Deep Neural Network (DNN) models\, e.g.\, for accu
 rate object detection\, leading to high computing costs that limit thei
 r applicability for brute-force analysis of all historical videos. Also\, an
  efficient design of such applications often requires co-analysis of video
  data along with associated geospatial and temporal metadata\, which is a 
 challenge.\n\nWe propose a geospatial-temporal video query system with sup
 port for semantic queries for drone videos\, extending an existing spatial
 -temporal database and contemporary object detection models. We develop a 
 heuristic to enable better reuse of semantic object detections obtained fr
 om different configurations (object detection model and its input resoluti
 on). The system further motivates the need for optimizations for retrospe
 ctive semantic analysis and storage for drone videos\, which is addressed 
 by our novel DDownscale method and the associated ingest pipeline.\n\nPrio
 r optimizations on semantic querying over video data focus on static camer
 as from city-scale traffic/surveillance camera networks\, often leveraging
  the spatial and temporal characteristics of associated videos\, which are
  absent in videos recorded by mobile drone cameras. We specifically focus 
 on two such characteristics of drone videos. One is that drone videos have
  shorter durations\, unlike those captured by static cameras. Another is t
 hat there can be large variations in the level of detail of information ca
 ptured across a fleet of drone cameras due to differences in the resolutio
 n of the camera\, the altitude\, and the orientation from which the videos
  were captured.\n\nSpecifically\, we address the need to intelligently sca
 le down the spatial resolution of videos to reduce the video storage costs
  and semantic query/inferencing time. However\, conventional methods of ma
 nual or profiling-based estimation of the ideal scaling ratio are compute-
 intensive and/or time-consuming for such heterogeneous feeds. We propose D
 Downscale\, a novel method to dynamically select the downscale factor for 
 a video by utilizing the information on the object size in the video. We m
 odel the downscale factor and associated drop in relative recall due to do
 wnscaling as a function of object size in the downscaled video and demonst
 rate that for a given DNN model and class of interest\, DDownscale genera
 lizes well to the evaluated datasets. A DDownscale inequality between the r
 elative recall drop and the hyper-parameters of the method is derived. Thi
 s inequality holds for 98% of the dynamically downscaled videos across dat
 asets\, objects of interest\, and parameters. The algorithm achieves over 1
 9% reduction
  in total object detection time and 24% reduction in storage on average co
 mpared to the baseline of storing/inferencing at the original resolution\, fo
 r different user-specified target reduction in recall values ranging f
 rom 1-30%\, and 96% of the downscaled videos are within the target recall
  drop.\n\nUsing the above modeling\, a simpler ingest-time specification w
 as derived that relates the target level of detail (average ground spatia
 l distance) captured in the video to the harmonic mean of the relative re
 call drop for the smallest object class of interest and the selected obje
 ct detection model\, to aid in selecting a target level of detail. Additi
 onally\, we develop an ingest pipeline that reduces the time to ingest dr
 one videos using this dynamic downscaling method over heterogeneous edge a
 ccelerators\, reducing the average turnaround time to ingest data from mu
 ltiple clients by ~66%\, despite the downscaling time overhead\, compare
 d to uploading original-resolution video without downscaling.\n\n\n\nALL A
 RE WELCOME
CATEGORIES:Events,MTech Research Thesis Colloquium
END:VEVENT
BEGIN:VTIMEZONE
TZID:Asia/Kolkata
X-LIC-LOCATION:Asia/Kolkata
BEGIN:STANDARD
DTSTART:20230302T150000
TZOFFSETFROM:+0530
TZOFFSETTO:+0530
TZNAME:IST
END:STANDARD
END:VTIMEZONE
END:VCALENDAR