STCamp 2019 Program

 

Program

 

Overview

 

SCIENCE AND TECHNOLOGY AUTUMN CAMP

ARTIFICIAL INTELLIGENCE: FUNDAMENTAL & APPLICATION

(Danang - Vietnam, 21 – 23/10/2019)

Location: University of Science and Education - The University of Danang
Conference Room, 3rd floor, A5 Building

Monday, 21/10/2019

  07:30 – 08:00   Registration
  08:00 – 08:30   Welcome and Local Presentation
  08:30 – 10:00   Lecture 1. Introduction to Deep Learning
  10:00 – 10:30   Tea break
  10:30 – 12:00   Lecture 1. Introduction to Deep Learning (cont.)
  12:00 – 13:30   Break
  13:30 – 15:00   Lecture 2. Binary Symbolic Execution
  15:00 – 15:30   Tea break
  15:30 – 17:00   Lecture 2. Binary Symbolic Execution (cont.)

Tuesday, 22/10/2019

  08:30 – 10:00   Lecture 3. Deep Learning For Natural Language Processing And Beyond
  10:00 – 10:30   Tea break
  10:30 – 12:00   Lecture 3. Deep Learning For Natural Language Processing And Beyond (cont.)
  12:00 – 13:30   Break
  13:30 – 15:00   Lecture 4. Audio/speech information hiding based on human auditory characteristics
  15:00 – 15:30   Tea break
  15:30 – 17:00   Lecture 4. Audio/speech information hiding based on human auditory characteristics (cont.)

Wednesday, 23/10/2019

  08:30 – 10:00   Lecture 5. Proof Script Generation from Proof Scores
  10:30 – 12:00   Closing; Certificates and Final Words
  12:00 – 13:30   Social Event
 

Detailed Program

 


 

Registration

 


 

Welcome and Local Presentation

Speakers:

 

  • Associate Prof. Dr. Vo Van Minh, Vice-Rector, University of Science and Education, The University of Danang

  • Prof. Mizuhito Ogawa (JAIST)

 


 

Lecture 1: Introduction to Deep Learning

Duration: 3h00

Lecture 3: Deep Learning For Natural Language Processing And Beyond

Duration: 3h00

Lecturer: Prof. Nguyen Le Minh (JAIST)

Practice on Deep Learning: Dr. Tran Duc Vu, Nguyen Lab (JAIST)

Abstract: The lecture begins with the basics of feed-forward neural networks and the fundamental knowledge relevant to deep learning. We then introduce more specialized neural network models, including convolutional neural networks, recurrent neural networks, and attention-based models. In the second part, we present how these models and techniques can be applied to several interesting problems in natural language processing, including sentiment classification, textual entailment recognition, natural language generation, and question answering. The last part of the tutorial shows how deep learning and natural language processing techniques can be adapted for program analysis.
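As a concrete anchor for the first part of the lecture, the sketch below (not taken from the lecture materials) trains a minimal feed-forward classifier on a toy sentiment-style task. It assumes PyTorch is available; the vocabulary size, dimensions, and data are invented purely for illustration.

# Minimal feed-forward "sentiment" classifier (toy illustration, not lecture code).
# Assumes PyTorch is installed; data, vocabulary size, and dimensions are invented.
import torch
import torch.nn as nn

torch.manual_seed(0)
VOCAB_SIZE, HIDDEN, NUM_CLASSES = 100, 32, 2   # toy sizes

model = nn.Sequential(
    nn.Linear(VOCAB_SIZE, HIDDEN),    # input: a bag-of-words vector per document
    nn.ReLU(),                        # non-linearity between the two layers
    nn.Linear(HIDDEN, NUM_CLASSES),   # output: one score per sentiment class
)

# Synthetic "documents": random bag-of-words vectors with random labels.
x = torch.rand(64, VOCAB_SIZE)
y = torch.randint(0, NUM_CLASSES, (64,))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                # a few steps are enough to show the training loop
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)       # forward pass + loss
    loss.backward()                   # back-propagation
    optimizer.step()                  # gradient update
    print(f"epoch {epoch}: loss = {loss.item():.4f}")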



 

Lecture 2: Binary Symbolic Execution

Duration: 3h00

Lecturer: Prof. Mizuhito Ogawa (JAIST)

Abstract: Symbolic execution is an old technique in software engineering, proposed in the 1970s. Instead of executing a program with concrete values (as in testing), symbolic execution runs it on symbolic values. The result is a path condition for each program trace, in the style of Hoare logic. By finding a satisfiable instance of the path condition, we obtain a test input that follows that path, so test coverage can be guaranteed. With the progress of SMT (satisfiability modulo theories) solvers in the 2000s, many symbolic execution tools have been developed, e.g., WALA and JPF/SE for Java and KLEE for C. From around 2015, several binary symbolic execution tools have been released to the public, e.g., McVeto, KLEE-MC, angr, Mayhem, Codisasm, BE-PUM, Corana, and SyMIPS. All but the last two target x86; the last two target ARM and MIPS, respectively. We give an overview of what symbolic execution is, how symbolic execution is applied to binaries, and how symbolic execution tools for binaries are developed.
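To make the path-condition idea concrete, here is a small illustrative sketch (not one of the tools listed above): the input of a two-branch program is treated as a symbolic integer, and an SMT solver is asked for an input that reaches the innermost branch. It assumes the z3-solver Python package; the program and names are invented for the example.

# Toy path-condition example (illustration only, not one of the tools above).
# Assumes the z3-solver package: pip install z3-solver
from z3 import Int, Solver, sat

# Program under analysis (shown here only as a comment):
#   def f(x):
#       if 3 * x + 1 > 10:      # branch A
#           if x % 2 == 0:      # branch B
#               reach_bug()
x = Int("x")                            # symbolic input instead of a concrete value

# Path condition for the trace that reaches reach_bug(): both branches taken.
path_condition = [3 * x + 1 > 10, x % 2 == 0]

s = Solver()
s.add(*path_condition)
if s.check() == sat:
    print("test input covering this path: x =", s.model()[x])
else:
    print("this path is infeasible")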


 


Lecture 4: Audio/speech information hiding based on human auditory characteristics

Duration: 3h00

Lecturer: Prof. Masashi Unoki (JAIST)

Abstract: Audio information hiding (AIH) has recently attracted attention as a state-of-the-art technique for protecting the copyright of audio/speech content and defending it against attacks and tampering. This technique aims to embed codes as watermarks into audio/speech content, such that they are inaudible to and inseparable by users, and to detect the embedded codes from watermarked signals. It also aims to verify whether embedded codes can be robustly detected from watermarked signals (robust or fragile), whether they can be detected blindly (blind or non-blind), whether watermarked signals can be completely restored to the originals by removing the embedded codes (reversible or irreversible), and whether the scheme remains secure even when its algorithm is made public (public or private methods). AIH methods, therefore, must satisfy some of the following five requirements to provide a useful and reliable form of watermarking: (a) inaudibility (inaudible to humans, with no sound distortion caused by the embedded data), (b) robustness (not affected when subjected to techniques such as data compression and malicious attacks), (c) blind detectability (high possibility of detecting the embedded data without using the original or a reference signal), (d) confidentiality (secure and undetectable concealment of the embedded data), and (e) reversibility (the embedded data can be removed from the watermarked signal and/or the watermark can be re-edited). In this talk, historical and typical AIH methods are introduced and their drawbacks are pointed out. Then, our proposed methods based on human auditory characteristics (cochlear delay, adaptive phase modulation, singular spectrum analysis with a psychoacoustic model, and formant enhancement) are introduced.
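As a minimal illustration of the embed/detect pattern only, the sketch below hides a bit string in the least significant bits of 16-bit PCM samples. LSB substitution is a classic textbook scheme, not one of the auditory-characteristics-based methods proposed in the talk; the host signal and payload are synthetic, and NumPy is assumed.

# Toy watermark embedding/detection by least-significant-bit (LSB) substitution.
# NOT one of the auditory-model-based methods in the talk; it only shows the
# embed -> blind-detect pattern on synthetic 16-bit PCM data. Assumes NumPy.
import numpy as np

def embed(samples: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide one payload bit per sample in the LSB of 16-bit samples."""
    out = samples.copy()
    out[:len(bits)] = (out[:len(bits)] & ~1) | bits   # clear the LSB, then set it to the payload bit
    return out

def detect(samples: np.ndarray, n_bits: int) -> np.ndarray:
    """Blind detection: read the LSBs back without the original (host) signal."""
    return samples[:n_bits] & 1

rng = np.random.default_rng(0)
host = rng.integers(-2**15, 2**15 - 1, size=1000).astype(np.int16)  # stand-in "audio"
payload = rng.integers(0, 2, size=64).astype(np.int16)              # watermark bits

marked = embed(host, payload)
recovered = detect(marked, len(payload))
print("payload recovered exactly:", np.array_equal(recovered, payload))  # expected: True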


 

Lecture 5: Proof Script Generation from Proof Scores

Duration: 3h00

Lecturer: Prof. Kazuhiro Ogata (JAIST)

Abstract: CafeOBJ is an algebraic specification language and processor. Various kinds of systems, such as security protocols, have been formally specified in CafeOBJ and formally verified by writing proof scores in CafeOBJ. Proof scores are proof plans written manually by human users and are therefore subject to human error: users may overlook some cases, and CafeOBJ itself does not detect such omissions. The first implementation of CafeOBJ is written in Common Lisp. A second implementation, CafeInMaude, is built on top of Maude, a sibling language of CafeOBJ. CafeInMaude has a proof assistant called the CafeInMaude Proof Assistant (CiMPA) and a proof generator called the CafeInMaude Proof Generator (CiMPG). Given proof scores written in CafeOBJ, CiMPG automatically generates proof scripts that can be fed into CiMPA. If the generated proof scripts are discharged by CiMPA, the proof scores are correct; thus, CiMPG can check whether proof scores are correct. An example is used to describe how to write system specifications in CafeOBJ, how to write proof scores in CafeOBJ, and how to use CiMPA and CiMPG.
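To give a flavour of what a proof score looks like, here is a minimal CafeOBJ-style sketch (an invented example, not taken from the lecture and not in CiMPG/CiMPA input format): a specification of Peano natural numbers and a proof score for the lemma X + 0 = X by induction on X.

-- Minimal CafeOBJ-style sketch (invented example, not taken from the lecture).
-- A specification of Peano natural numbers:
mod! PNAT {
  [ PNat ]
  op 0 : -> PNat {constr} .
  op s : PNat -> PNat {constr} .
  op _+_ : PNat PNat -> PNat .
  vars X Y : PNat .
  eq 0 + Y = Y .
  eq s(X) + Y = s(X + Y) .
}

-- A proof score for the lemma X + 0 = X, by induction on X.
-- Base case: X = 0
open PNAT .
  red 0 + 0 == 0 .           -- expected result: true
close

-- Induction step: assume x + 0 = x, show s(x) + 0 = s(x)
open PNAT .
  op x : -> PNat .           -- fresh constant denoting an arbitrary PNat
  eq x + 0 = x .             -- induction hypothesis added as an equation
  red s(x) + 0 == s(x) .     -- expected result: true
close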


 

Closing

 


 

Certificates and Final Words