
Academic Events


Speakers
Session 1: Graph Neural Network (Prof. Hyunwoo Kim, Department of Computer Science, Korea University)
Assistant Professor, Department of Computer Science, College of Informatics, Korea University (2019 ~)
Amazon Lab126, Applied Scientist (2017 ~ 2019)
University of Wisconsin-Madison, Ph.D. in Computer Science (2017)
Graph neural networks (GNNs) are neural network architectures specialized for graph analysis and are applied across many fields; to handle data living in non-Euclidean spaces effectively, new models that reinterpret CNNs, Transformers, and other architectures on graphs continue to be proposed. This lecture discusses the strengths and limitations of recent models alongside the fundamental GNN models, and presents hands-on implementation of deep learning models for graph data together with various applications.
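The neighborhood-aggregation idea at the heart of basic GNN models can be sketched in a few lines. The following is an illustrative GCN-style layer in plain NumPy, not code from the lecture; the toy graph, features, and weights are made-up assumptions.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN-style layer: symmetrically normalized neighborhood
    aggregation, followed by a linear map and a ReLU."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    deg = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(deg ** -0.5)       # D^{-1/2}
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
    return np.maximum(A_norm @ H @ W, 0.0)  # aggregate, transform, ReLU

# Toy graph: a 3-node path 0-1-2, with one-hot node features.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.eye(3)                # node features
W = np.ones((3, 2))          # toy weight matrix
out = gcn_layer(A, H, W)
print(out.shape)             # (3, 2): 2-dimensional embedding per node
```

Stacking several such layers lets each node's embedding absorb information from progressively larger neighborhoods.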
 

Session 2: Recommender Systems Using GNNs (Prof. Won-Yong Shin, School of Mathematics and Computing (Computational Science and Engineering), Yonsei University)
Associate Professor/Professor, Department of Computational Science and Engineering, Yonsei University (2019 ~)
Assistant/Associate Professor, Department of Computer Science, Dankook University (2012 ~ 2019)
Harvard University, Postdoctoral Fellow/Research Associate (2009 ~ 2012)
KAIST, Ph.D. in Electrical Engineering and Computer Science (2008)
Among recommender systems, which have drawn great attention with the growth of online services, graph-neural-network-based collaborative filtering, which exploits the connectivity between users and items, stands out for its high accuracy. This lecture gives an overview of state-of-the-art (SOTA) GNN-based recommender system techniques and covers approaches to model optimization. It also discusses practical challenges facing current GNN-based recommender systems and introduces a recommender system that uses a new GNN based on sign-aware graph generation to address them.
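The core mechanism of GNN-based collaborative filtering — propagating embeddings over the user-item bipartite graph and scoring by inner product — can be illustrated with a small LightGCN-style sketch. The interaction matrix, embedding size, and layer count below are toy assumptions, not the speaker's model.

```python
import numpy as np

# Toy interaction matrix R: 2 users x 3 items (1 = observed interaction).
R = np.array([[1., 1., 0.],
              [0., 1., 1.]])
n_u, n_i = R.shape

# Bipartite adjacency over the combined user+item node set.
A = np.zeros((n_u + n_i, n_u + n_i))
A[:n_u, n_u:] = R
A[n_u:, :n_u] = R.T

deg = A.sum(axis=1)
A_norm = np.diag(deg ** -0.5) @ A @ np.diag(deg ** -0.5)

rng = np.random.default_rng(0)
E0 = rng.normal(size=(n_u + n_i, 4))    # initial user/item embeddings

# LightGCN-style propagation: no weights or nonlinearity, just
# K rounds of normalized aggregation, averaged across layers.
K = 2
layers = [E0]
for _ in range(K):
    layers.append(A_norm @ layers[-1])
E = np.mean(layers, axis=0)

users, items = E[:n_u], E[n_u:]
scores = users @ items.T                # predicted preference scores
print(scores.shape)                     # (2, 3)
```

Ranking each user's unobserved items by these scores yields the recommendations; training would fit `E0` with a pairwise ranking loss such as BPR.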
 

Session 3: Neural ODE and Score-based Generative Model (Prof. Noseong Park, Department of Computer Science / Department of Artificial Intelligence, Yonsei University)
Assistant Professor, Department of Computer Science / Department of Artificial Intelligence, Yonsei University (2020 ~)
Assistant Professor, Information Sciences and Technology and Center for Secure Information Systems, George Mason University (2018 ~ 2020)
Assistant Professor, Software and Information Systems, University of North Carolina at Charlotte (2016 ~ 2018)
University of Maryland, Ph.D. in Computer Science (2016)
Neural ODEs are a key concept for designing continuous-depth models and have been widely used since their introduction in 2018. This lecture explains the core ideas of Neural ODEs, the adjoint sensitivity training method, and continuous normalizing flows, and introduces recent research, such as Neural-ODE-based recommendation algorithms and time-series forecasting, together with example code. Score-based generative models exhibit outstanding sampling quality and diversity. The lecture covers how the generative process is modeled as a stochastic differential equation, the basic concepts of denoising score matching, various sampling methods, and recent work achieving state-of-the-art performance in tabular data synthesis.
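The continuous-depth idea can be made concrete in a few lines: the "network" is a vector field f, and the forward pass is numerical integration of dh/dt = f(h, t) from t=0 to t=1. This is an illustrative sketch with a made-up tanh vector field and explicit Euler integration, not the lecture's example code; in practice an adaptive solver is used, and gradients are obtained by integrating the adjoint ODE backward in time.

```python
import numpy as np

def f(h, t, W):
    """Toy vector field dh/dt = tanh(W h) parameterizing the
    continuous-depth dynamics (W plays the role of learned weights)."""
    return np.tanh(W @ h)

def odeint_euler(h0, W, t0=0.0, t1=1.0, steps=100):
    """Forward pass of a Neural ODE via explicit Euler steps;
    h(t1) plays the role of the network's output."""
    h, dt = h0.copy(), (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        h = h + dt * f(h, t, W)
        t += dt
    return h

rng = np.random.default_rng(0)
W = 0.5 * rng.normal(size=(2, 2))
h0 = np.array([1.0, -1.0])   # input state
h1 = odeint_euler(h0, W)     # output state at t = 1
print(h1.shape)              # (2,)
```

A continuous normalizing flow augments this system with one more ODE for the log-density, integrating the trace of the Jacobian of f along the same trajectory.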
 

Session 4: Diffusion Probabilistic Models and Text-to-Image Generative Models (Prof. Ernest K. Ryu, Department of Mathematical Sciences, Seoul National University)
Assistant Professor, Department of Mathematical Sciences, Seoul National University (2020 ~)
Adjunct Assistant Professor, Department of Mathematics, University of California, Los Angeles (2016 ~ 2019)
Stanford University, Ph.D. in Computational and Mathematical Engineering (2016)
This tutorial overviews recent developments in diffusion probabilistic models and text-to-image generative models. We start with the theory of diffusion probabilistic models based on stochastic differential equations, then move on to conditional generation, and conclude with recent text-conditioned diffusion models such as DALL·E 2 and Stable Diffusion.
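As a small illustration of the forward (noising) half of a diffusion probabilistic model, the DDPM-style closed-form sampling of x_t given x_0 can be sketched as follows; the linear beta schedule and the toy data are illustrative assumptions, not material from the tutorial.

```python
import numpy as np

# Linear beta schedule over T steps (values are illustrative).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)   # \bar{alpha}_t = prod_{s<=t} (1 - beta_s)

def q_sample(x0, t, rng):
    """Forward diffusion in closed form:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps,
    with eps ~ N(0, I); no need to simulate t noising steps."""
    eps = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

rng = np.random.default_rng(0)
x0 = np.ones(4)                  # toy "clean" sample
xT = q_sample(x0, T - 1, rng)    # nearly pure Gaussian noise at t = T-1
print(xT.shape)                  # (4,)
```

Since alpha_bar decays toward zero, x_T is close to standard Gaussian noise; generation then reverses this process with a learned denoiser, and text conditioning enters through the denoiser's inputs.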