<?xml version='1.0' encoding='UTF-8'?><rss xmlns:dc='http://purl.org/dc/elements/1.1/' xmlns:geo='http://www.w3.org/2003/01/geo/wgs84_pos#' xmlns:media='http://search.yahoo.com/mrss/' version='2.0' xmlns:xCal='urn:ietf:params:xml:ns:xcal'><channel><title>Calendar - Department of Computer Science</title><link>https://events.rochester.edu/group/department_of_computer_science/calendar</link><description>Calendar - Department of Computer Science</description><lastBuildDate>Sun, 08 Mar 2026 16:12:42 -0400</lastBuildDate><ttl>60</ttl><language>en-us</language><generator>Localist</generator><item><title>Mar 16, 2026: PhD Thesis Defense: Chao Huang, Computer Science at Wegmans Hall</title><description><![CDATA[<p>Chao Huang, "Controllable Generative Modeling for Multimodal Perception and Synthesis" </p>

<p>Advisor: Prof. Chenliang Xu (Computer Science) </p>

<p>Committee: Prof. Jiebo Luo (Computer Science), Prof. Zhiyao Duan (Computer Science), Prof. Yapeng Tian (UT Dallas, Computer Science) </p>

<p>Chair: Prof. Lisha Chen (Electrical &amp; Computer Engineering) </p>

<p>Recent advances in generative modeling have reshaped machine perception, enabling systems that not only recognize multimodal signals but also synthesize and manipulate them. For example, modern models can generate photorealistic images or clone voices with high fidelity. However, despite this rapid progress, current multimodal models remain fundamentally limited: they often lack spatial grounding, struggle to disentangle overlapping sound sources in complex scenes, and offer only coarse mechanisms for user control. These limitations represent a key bottleneck in moving from passive recognition to active, intelligent content creation.</p>

<p>To address these limitations, this thesis introduces a unified framework for controllable generative modeling, bridging the gap between multimodal perception and synthesis. Specifically, we aim to solve three fundamental challenges: 1) how to ground generative models in physical and spatial reality, 2) how to leverage generative priors for high-fidelity audio–visual separation, and 3) how to implement precise, task-agnostic control mechanisms. </p>

<p>First, I establish perceptual foundations through Egocentric Audio-Visual Localization, which models spatial cues via egomotion, and Acoustic Primitives, a joint-anchored representation of human soundfields. These demonstrate that physical grounding can be learned directly from dynamic multimodal streams. Second, I shift from discriminative separation to generative modeling with DAVIS, a diffusion-based conditional framework, and ZeroSep, a training-free separator leveraging text-to-audio generative priors. Finally, I introduce task-agnostic control mechanisms: VisAH utilizes language and video for expressive audio highlighting, while FreSca employs frequency-space manipulation to enable training-free steering across image and video generation. </p>

<p>In summary, the contributions of this thesis chart a coherent progression from physically grounded perception to controllable generative synthesis. The resulting systems can localize, separate, and manipulate multimodal content, laying the foundation for future multimodal agents capable of reasoning over and creating long-form, open-world content.</p>

<p><a href="https://events.rochester.edu/event/phd-thesis-defense-chao-huang-computer-science">View on site</a> | <a href="mailto:?subject=I+found+an+interesting+event%3A+PhD+Thesis+Defense%3A+Chao+Huang%2C+Computer+Science&amp;body=I+found+an+interesting+event+you+may+like%3A%0A%0A%0ADate%3A+Mar+16%2C+2026%0A%0ADescription%3A%0AChao+Huang%2C+%22Controllable+Generative+Modeling+for+Multimodal+Perception+and+Synthesis%22+%0A%0AAdvisor%3A+Prof.+Chenliang+Xu+%28Computer+Science%29+%0A%0ACommittee%3A+Prof.+Jiebo+Luo+%28Computer+Science%29%2C+Prof.+Zhiyao+Duan+%28Computer+Science%29%2C+Prof.+Yapeng+Tian+%28UT+Dallas%2C+Computer+Science%29+%0A%0AChair%3A+Prof.+Lisha+Chen+%28Electrical+%26+Computer+Engineering%29+%0A%0ARecent+advances+in+generative+modeling+have+reshaped+machine+perception%2C+enabling+systems+that+not+only+recognize+multimodal+signals+but+also+synthesize+and+manipulate+them.+For+example%2C+modern+models+can+generate+photorealistic+images+or+clone+voices+with+high+fidelity.+However%2C+despite+this+rapid+progress%2C+current+multimodal+models+remain+fundamentally+limited%3A+they+often+lack+spatial+grounding%2C+struggle+to+disentangle+overlapping+sound+sources+in+complex+scenes%2C+and+offer+only+coarse+mechanisms+for+user+control.+These+limitations+represent+a+key+bottleneck+in+moving+from+passive+recognition+to+active%2C+intelligent+content+creation.%0A%0ATo+address+these+limitations%2C+this+thesis+introduces+a+unified+framework+for+controllable+generative+modeling%2C+bridging+the+gap+between+multimodal+perception+and+synthesis.+Specifically%2C+we+aim+to+solve+three+fundamental+challenges%3A+1%29+how+to+ground+generative+models+in+physical+and+spatial+reality%2C+2%29+how+to+leverage+generative+priors+for+high-fidelity+audio%E2%80%93visual+separation%2C+and+3%29+how+to+implement+precise%2C+task-agnostic+control+mechanisms.+%0A%0AFirst%2C+I+establish+perceptual+foundations+through+Egocentric+Audio-Visual+Localization%2C+which+models+spatial+cues+via+egomotion
%2C+and+Acoustic+Primitives%2C+a+joint-anchored+representation+of+human+soundfields.+These+demonstrate+that+physical+grounding+can+be+learned+directly+from+dynamic+multimodal+streams.+Second%2C+I+shift+from+discriminative+separation+to+generative+modeling+with+DAVIS%2C+a+diffusion-based+conditional+framework%2C+and+ZeroSep%2C+a+training-free+separator+leveraging+text-to-audio+generative+priors.+Finally%2C+I+introduce+task-agnostic+control+mechanisms%3A+VisAH+utilizes+language+and+video+for+expressive+audio+highlighting%2C+while+FreSca+employs+frequency-space+manipulation+to+enable+training-free+steering+across+image+and+video+generation.+%0A%0AIn+summary%2C+the+contributions+of+this+thesis+chart+a+coherent+progression+from+physically+grounded+perception+to+controllable+generative+synthesis.+The+resulting+systems+can+localize%2C+separate%2C+and+manipulate+multimodal+content%2C+laying+the+foundation+for+future+multimodal+agents+capable+of+reasoning+over+and+creating+long-form%2C+open-world+content.%0A%0Ahttps%3A%2F%2Fevents.rochester.edu%2Fevent%2Fphd-thesis-defense-chao-huang-computer-science%0A">Email this event</a></p>]]></description><guid isPermaLink='false'>tag:localist.com,2008:EventInstance_52081834287213</guid><geo:lat>43.126069</geo:lat><geo:long>-77.629191</geo:long><pubDate>Mon, 16 Mar 2026 09:00:00 -0400</pubDate><dc:date>2026-03-16T09:00:00-04:00</dc:date><link>https://events.rochester.edu/event/phd-thesis-defense-chao-huang-computer-science</link><media:content medium='image' url='https://localist-images.azureedge.net/photos/42162424112821/huge/3a1747893cb0cd8b7ed2511c420aa85087481b3e.jpg'/></item><item><title>Apr 20, 2026: CS Seminar Series: Shriram Krishnamurthi at Wegmans Hall</title><description><![CDATA[<p>The Cognitive and Human Factors of Formal Methods</p>


<p>Abstract:</p>

<p>As formal methods improve in expressiveness and power, they create new opportunities for non-expert adoption. In principle, formal tools are now powerful enough to enable developers to scalably validate realistic systems artifacts without extensive formal training. However, realizing this potential for adoption requires attention to not only the technical but also the human side—which has received extraordinarily little attention from formal-methods research.</p>


<p>This talk presents some of our efforts to address this paucity. We apply ideas from cognitive science, human-factors research, and education theory to improve the usability of formal methods. Along the way, we uncover misconceptions held by users, find that technically appealing designs prized by experts may fail to help, and see how our tools may even mislead users.</p>


<p>Bio:</p>

<p>Shriram Krishnamurthi is a Professor of Computer Science at Brown University. With collaborators and students, he has created several influential systems such as DrRacket, Margrave, Flapjax, LambdaJS, Flowlog, and Pyret. He has also written multiple widely used books and co-directs the Bootstrap integrated computing outreach program. For his work he has received SIGPLAN's Robin Milner Young Researcher Award, SIGPLAN's Software Award (jointly), SIGSOFT's Influential Educator Award, SIGPLAN's Distinguished Educator Award (jointly), and Brown's Wriston and Philip J. Bray teaching awards. He has authored over twenty papers recognized for honors by program committees. He has an honorary doctorate from the Università della Svizzera Italiana.</p>

<p><a href="https://events.rochester.edu/event/cs-seminar-series-shriram-krishnamurthi">View on site</a> | <a href="mailto:?subject=I+found+an+interesting+event%3A+CS+Seminar+Series%3A+Shriram+Krishnamurthi&amp;body=I+found+an+interesting+event+you+may+like%3A%0A%0A%0ADate%3A+Apr+20%2C+2026%0A%0ADescription%3A%0AThe+Cognitive+and+Human+Factors+of+Formal+Methods%0A%0A+%0A%0AAbstract%3A%0A%0AAs+formal+methods+improve+in+expressiveness+and+power%2C+they+create+new+opportunities+for+non-expert+adoption.+In+principle%2C+formal+tools+are+now+powerful+enough+to+enable+developers+to+scalably+validate+realistic+systems+artifacts+without+extensive+formal+training.+However%2C+realizing+this+potential+for+adoption+requires+attention+to+not+only+the+technical+but+also+the+human+side%E2%80%94which+has+received+extraordinarily+little+attention+from+formal-methods+research.%0A%0A+%0A%0AThis+talk+presents+some+of+our+efforts+to+address+this+paucity.+We+apply+ideas+from+cognitive+science%2C+human-factors+research%2C+and+education+theory+to+improve+the+usability+of+formal+methods.+Along+the+way%2C+we+find+misconceptions+suffered+by+users%2C+how+technically+appealing+designs+that+experts+may+value+may+fail+to+help%2C+and+how+our+tools+may+even+mislead+users.%0A%0A+%0A%0ABio%3A%0A%0AShriram+Krishnamurthi+is+a+Professor+of+Computer+Science+at+Brown+University.+With+collaborators+and+students%2C+he+has+created+several+influential+systems+like+DrRacket%2C+Margrave%2C+Flapjax%2C+LambdaJS%2C+Flowlog%2C+and+Pyret.+He+has+also+written+multiple+widely-used+books.+He+also+co-directs+the+Bootstrap+integrated+computing+outreach+program.+For+his+work+he+has+received+SIGPLAN%27s+Robin+Milner+Young+Researcher+Award%2C+SIGPLAN%27s+Software+Award+%28jointly%29%2C+SIGSOFT%27s+Influential+Educator+Award%2C+SIGPLAN%27s+Distinguished+Educator+Award+%28jointly%29%2C+and+Brown%27s+Wriston+and+Philip+J.+Bray+teaching+awards.+He+has+authored+over+twenty+papers+recognized+for+honors+by+program+committees.+He+ha
s+an+honorary+doctorate+from+the+Universit%C3%A0+della+Svizzera+Italiana.%0A%0Ahttps%3A%2F%2Fevents.rochester.edu%2Fevent%2Fcs-seminar-series-shriram-krishnamurthi%0A">Email this event</a></p>]]></description><guid isPermaLink='false'>tag:localist.com,2008:EventInstance_52030204293806</guid><geo:lat>43.126069</geo:lat><geo:long>-77.629191</geo:long><pubDate>Mon, 20 Apr 2026 12:00:00 -0400</pubDate><dc:date>2026-04-20T12:00:00-04:00</dc:date><link>https://events.rochester.edu/event/cs-seminar-series-shriram-krishnamurthi</link><media:content medium='image' url='https://localist-images.azureedge.net/photos/42162424112821/huge/3a1747893cb0cd8b7ed2511c420aa85087481b3e.jpg'/><category>Lectures &amp; Talks</category></item><item><title>May 4, 2026: Design Day at Goergen Athletic Center</title><description><![CDATA[<p>The Rochester community is invited to attend the Hajim School of Engineering &amp; Applied Sciences' Design Day, where hundreds of students showcase their capstone projects in person. Design capstones are culminating experiences for seniors in the Hajim School departments and programs, as well as graduate students in data science and Medical Technology and Innovation master's degree programs.</p>

<p>In addition to the hands-on opportunity to create something novel, the design projects give students the chance to connect with potential employers, engage in global experiences, and conduct interdisciplinary research with collaborators from places like the University of Rochester Medical Center. The clients who presented the capstone teams with their design problems have been providing input throughout the semester as the students work on solutions under the guidance of their faculty advisors.</p>

<p><a href="https://events.rochester.edu/event/design-day-1645">View on site</a> | <a href="mailto:?subject=I+found+an+interesting+event%3A+Design+Day&amp;body=I+found+an+interesting+event+you+may+like%3A%0A%0A%0ADate%3A+May+4%2C+2026%0A%0ADescription%3A%0AThe+Rochester+community+is+invited+to+attend+the+Hajim+School+of+Engineering+%26+Applied+Sciences%27+Design+Day%2C+where+hundreds+of+students+showcase+their+capstone+projects+in+person.+Design+capstones+are+culminating+experiences+for+seniors+in+the+Hajim+School+departments+and+programs%2C+as+well+as+graduate+students+in+data+science+and+Medical+Technology+and+Innovation+master%27s+degree+programs.%0A%0AIn+addition+to+providing+hands-on+opportunities+to+create+something+novel%2C+the+design+projects+offer+opportunities+for+students+to+connect+with+potential+employers%2C+engage+in+global+experiences%2C+and+conduct+interdisciplinary+research+with+collaborators+from+places+like+the+University+of+Rochester+Medical+Center.+Clients+who+have+presented+the+design+capstone+teams+with+problems+have+been+providing+input+throughout+the+semester+as+the+students+work+on+solutions+under+the+guidance+of+their+faculty+advisors.%0A%0Ahttps%3A%2F%2Fevents.rochester.edu%2Fevent%2Fdesign-day-1645%0A">Email this event</a></p>]]></description><guid isPermaLink='false'>tag:localist.com,2008:EventInstance_52242897508218</guid><geo:lat>43.130306</geo:lat><geo:long>-77.631959</geo:long><pubDate>Mon, 04 May 2026 09:00:00 -0400</pubDate><dc:date>2026-05-04T09:00:00-04:00</dc:date><link>https://events.rochester.edu/event/design-day-1645</link><media:content medium='image' url='https://localist-images.azureedge.net/photos/52242897634181/huge/1ecb7b158241b06e49724039aa048708d34030bb.jpg'/><category>Conferences &amp; Symposia</category></item></channel></rss>