
Ming-Ching Chang, Ph.D. [CV] [Google Scholar] [DBLP]
Assistant Professor, Department of Computer Science
College of Engineering and Applied Sciences
University at Albany, State University of New York
Phone: (518) 442-5085
Email: mchang2 AT albany DOT edu
Office: LI-090A, 1400 Western Avenue, Albany, NY 12222, USA
Lab: UA CVML lab

Datasets and Software for Research Use


  1. [Under Construction]
  2. Street Object Detection / Tracking for AI City Challenge

    Our team won an honorable mention award in the NVIDIA AI City Challenge at the IEEE Smart City Conference 2017. Our code for this challenge is available on GitHub (https://github.com/NVIDIAAICITYCHALLENGE/AICity_SunyAlbany). The code consists of two parts: (1) street object detection (vehicles, persons, traffic signals, etc.) implemented with the Google object detection API, and (2) hypergraph-based multi-object tracking based on the following publication: Longyin Wen, Wenbo Li, Junjie Yan, Zhen Lei, Dong Yi, and Stan Z. Li, "Multiple Target Tracking Based on Undirected Hierarchical Relation Hypergraph," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014, pp. 1282-1289. A minimal usage sketch of the detection stage follows the citations below.

    Please refer to our paper and use the following citations if you use this code:

    • @inproceedings{Yi:etal:AICity2017,
      author = {Yi Wei and Nenghui Song and Lipeng Ke and Ming-Ching Chang and Siwei Lyu},
      title = {Street Object Detection / Tracking for AI City Traffic Analysis},
      booktitle = {IEEE Smart World Congress},
      location = {San Jose, CA, USA},
      year = {2017},
      }

    • @inproceedings{aicity2017,
      author = {Milind Naphade and David C. Anastasiu and Anuj Sharma and Vamsi Jagrlamudi and Hyeran Jeon and Kaikai Liu and Ming-Ching Chang and Siwei Lyu and Zeyu Gao},
      title = {The {NVIDIA} {AI} {C}ity {C}hallenge},
      booktitle = {IEEE SmartWorld, Ubiquitous Intelligence \& Computing, Advanced \& Trusted Computed, Scalable Computing \& Communications, Cloud \& Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI)},
      OPTseries = {SmartWorld},
      year = {2017},
      OPTpublisher = {IEEE},
      OPTaddress = {Piscataway, NJ, USA},
      location = {East Bay Silicon Valley, CA, USA},
      }
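
    To make the two-stage pipeline concrete, the sketch below shows one way the detection stage could be run: a frozen TensorFlow 1.x Object Detection API model applied to a single video frame to produce the per-frame detections that the hypergraph tracker consumes. This is a minimal illustration under those assumptions, not the repository's actual script; the model path, frame file, and score threshold are placeholders.

      # Minimal sketch (not the repository's actual script): run a frozen
      # TensorFlow 1.x Object Detection API model on one video frame to get
      # the per-frame detections that the hypergraph tracker consumes.
      # GRAPH_PB, the frame file name, and SCORE_THRESHOLD are placeholders.
      import numpy as np
      import tensorflow as tf
      from PIL import Image

      GRAPH_PB = 'frozen_inference_graph.pb'  # exported detection model (placeholder)
      SCORE_THRESHOLD = 0.5                   # keep reasonably confident detections

      # Load the frozen detection graph.
      graph = tf.Graph()
      with graph.as_default():
          graph_def = tf.GraphDef()
          with tf.gfile.GFile(GRAPH_PB, 'rb') as f:
              graph_def.ParseFromString(f.read())
          tf.import_graph_def(graph_def, name='')

      with tf.Session(graph=graph) as sess:
          frame = np.array(Image.open('frame_000001.jpg'))  # one street-scene frame (placeholder)
          boxes, scores, classes = sess.run(
              ['detection_boxes:0', 'detection_scores:0', 'detection_classes:0'],
              feed_dict={'image_tensor:0': frame[None, ...]})
          # Boxes are [ymin, xmin, ymax, xmax] in normalized coordinates; the
          # thresholded detections are what would be written out per frame as
          # the tracker's input observations.
          keep = scores[0] > SCORE_THRESHOLD
          for box, cls in zip(boxes[0][keep], classes[0][keep]):
              print(int(cls), box)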


  3. UA-DETRAC: A New Benchmark and Protocol for Multi-Object Tracking

    UA-DETRAC is a challenging real-world multi-object detection and multi-object tracking benchmark. The dataset was extracted from 10 hours of video captured with a Canon EOS 550D camera at 27 different locations in Beijing and Tianjin, China. The videos are recorded at 25 frames per second (fps) with a resolution of 960×540 pixels. The UA-DETRAC dataset contains more than 140 thousand frames and 8,250 manually annotated vehicles, yielding a total of 1.21 million labeled bounding boxes. We also benchmark state-of-the-art object detection and multi-object tracking methods, and provide evaluation metrics and a corresponding evaluation website. A small annotation-parsing sketch follows the citation below.

    Please use the following citation for this work:

    • @inproceedings{long_etal_axrive15,
      title={DETRAC: A New Benchmark and Protocol for Multi-Object Tracking},
      author={Longyin Wen and Dawei Du and Zhaowei Cai and Zhen Lei and Ming-Ching Chang and Honggang Qi and Jongwoo Lim and Ming-Hsuan Yang and Siwei Lyu},
      year={2015},
      }
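
    As a quick-start illustration, the sketch below tallies frames, vehicle identities, and bounding boxes from one per-sequence UA-DETRAC annotation file; summed over all sequences this would give the dataset-level totals quoted above. The XML element and attribute names it assumes are based on the public annotation release and should be verified against the files you download; the file name is a placeholder.

      # Minimal sketch for reading one UA-DETRAC per-sequence annotation file.
      # The element/attribute names ('frame', 'target', 'id', 'box') are
      # assumptions; verify them against the downloaded annotation files.
      import xml.etree.ElementTree as ET

      def summarize_sequence(xml_path):
          """Return (num_frames, num_vehicle_ids, num_boxes) for one sequence."""
          root = ET.parse(xml_path).getroot()
          vehicle_ids, num_boxes, num_frames = set(), 0, 0
          for frame in root.iter('frame'):
              num_frames += 1
              for target in frame.iter('target'):
                  vehicle_ids.add(target.get('id'))
                  if target.find('box') is not None:  # box: left, top, width, height
                      num_boxes += 1
          return num_frames, len(vehicle_ids), num_boxes

      print(summarize_sequence('MVI_20011.xml'))  # placeholder sequence annotation file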


  4. [Under Construction]