Revision as of 15 April 2014, 17:34
Spring 2013/2014
ITI8565: Machine learning
Taught by: Kairit Sirts
EAP: 6.0
Time and place: Fridays
Lectures: 16:00-17:30, room X-406
Labs: 17:45-19:15, room X-412
Additional information: sirts@ioc.ee, juhan.ernits@ttu.ee
Skype: kairit.sirts
The course is organised by the Department of Computer Science and supported by IT Academy.
Students should also subscribe to the machine learning mailing list. It is used to spread information about this semester's course as well as any other machine learning related events happening at TUT, now and in the future.
Homework rankings based on results (just for fun): Ranking
NB! No lecture on 18.04.2014. Instead, there will be a joint session for solving homework problems on Thursday, 17.04, starting at 14:00 in ICT-411.
Assignments
The first homework, about decision trees, is open in Moodle. To submit, you have to register for the course.
The second homework, about KNN and K-means, is open in Moodle.
The third homework, about neural networks, is open in Moodle.
Data for the third homework
The fourth homework, about linear and logistic regression, is open in Moodle.
Data for the fourth homework
Lecture 1: Introduction, decision trees
Example worked through in class: When to play tennis?
Reading: also contains the full algorithm for decision-tree learning with the divide-and-conquer strategy.
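As a rough illustration of the lecture topic (this is my own sketch, not course material): decision-tree learners such as ID3 choose the attribute to split on by information gain, the reduction in label entropy after the split. The tiny "outlook" data below is invented, loosely echoing the play-tennis example.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(examples, labels, attribute):
    """Entropy reduction from splitting `examples` on `attribute`."""
    n = len(labels)
    remainder = 0.0
    for value in set(e[attribute] for e in examples):
        subset = [l for e, l in zip(examples, labels) if e[attribute] == value]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder

# Invented toy data: each example is an attribute -> value dict.
examples = [
    {"outlook": "sunny"}, {"outlook": "sunny"},
    {"outlook": "overcast"}, {"outlook": "rain"},
]
labels = ["no", "no", "yes", "yes"]
print(round(information_gain(examples, labels, "outlook"), 3))  # → 1.0
```

An ID3-style learner picks the attribute with the highest gain, splits the data on its values, and recurses on each subset.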
Lecture 2: K nearest neighbours
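A minimal sketch of the lecture's idea (my own illustration, with made-up 2-D points): classify a query by majority vote among its k nearest training points under Euclidean distance.

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Majority label among the k training points nearest to x (Euclidean)."""
    nearest = sorted(range(len(train_X)),
                     key=lambda i: math.dist(train_X[i], x))[:k]
    votes = Counter(train_y[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Two invented, well-separated clusters.
train_X = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
train_y = ["a", "a", "a", "b", "b", "b"]
print(knn_predict(train_X, train_y, (1, 1)))  # → a
```

KNN has no training phase at all; every prediction scans the stored training set, which is why the choice of k and the distance metric carry all the modelling decisions.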
Lecture 3: K-means clustering, MLE principle
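K-means (Lloyd's algorithm) alternates between assigning each point to its nearest centroid and recomputing every centroid as the mean of its cluster. A 1-D sketch with invented data; this is illustrative only, not the course's code:

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Lloyd's algorithm on 1-D points: assign, then update, until stable."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda j: (p - centroids[j]) ** 2)
            clusters[j].append(p)
        # Update step: each centroid moves to its cluster's mean.
        new = [sum(c) / len(c) if c else centroids[j]
               for j, c in enumerate(clusters)]
        if new == centroids:
            break
        centroids = new
    return sorted(centroids)

points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]
print(kmeans(points, 2))  # two centroids, near 1.0 and 9.0
```

The objective (total squared distance to assigned centroids) never increases across an iteration, so the algorithm always converges, though possibly to a local optimum that depends on the random initialisation.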
Lecture 4: Gaussian Mixture Model, EM algorithm
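The EM algorithm for a Gaussian mixture can be seen as a "soft" K-means: the E-step computes each point's responsibility under every component, and the M-step re-estimates weights, means, and variances from those responsibilities. A hedged 1-D, two-component sketch with invented data:

```python
import math

def em_gmm_1d(xs, iters=50):
    """EM for a two-component 1-D Gaussian mixture, initialised at min/max."""
    mu = [min(xs), max(xs)]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibilities r[i][j] ∝ pi_j * N(x_i | mu_j, var_j).
        r = []
        for x in xs:
            p = [pi[j] / math.sqrt(2 * math.pi * var[j])
                 * math.exp(-(x - mu[j]) ** 2 / (2 * var[j]))
                 for j in range(2)]
            s = sum(p)
            r.append([pj / s for pj in p])
        # M-step: weighted re-estimates of weight, mean, and variance.
        for j in range(2):
            nj = sum(ri[j] for ri in r)
            pi[j] = nj / len(xs)
            mu[j] = sum(ri[j] * x for ri, x in zip(r, xs)) / nj
            var[j] = sum(ri[j] * (x - mu[j]) ** 2
                         for ri, x in zip(r, xs)) / nj or 1e-6
    return mu

xs = [0.9, 1.0, 1.1, 8.9, 9.0, 9.1]
print([round(m, 2) for m in em_gmm_1d(xs)])  # means near 1.0 and 9.0
```

Each EM iteration is guaranteed not to decrease the data log-likelihood, which is the key property the lecture's derivation establishes.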
Lecture 5: History of neural networks, perceptron
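Rosenblatt's perceptron, the historical starting point of this lecture, can be sketched in a few lines (my own toy data, labels ±1): whenever an example is misclassified, add y·x to the weights and y to the bias.

```python
def perceptron_train(X, y, epochs=10):
    """Perceptron rule: w += t*x, b += t on each misclassified example."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(X, y):
            # Misclassified iff the signed margin is not positive.
            if t * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + t * xi for wi, xi in zip(w, x)]
                b += t
    return w, b

# Invented linearly separable data: +1 up-right, -1 down-left.
X = [(2, 2), (3, 3), (-2, -2), (-3, -1)]
y = [1, 1, -1, -1]
w, b = perceptron_train(X, y)
print(all(t * (w[0] * x[0] + w[1] * x[1] + b) > 0
          for x, t in zip(X, y)))  # → True
```

The perceptron convergence theorem guarantees this loop terminates on linearly separable data; on non-separable data (e.g. XOR, the famous Minsky–Papert objection) it cycles forever.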
Lecture 6: Artificial neural networks
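A feed-forward network is just repeated "weighted sum, then nonlinearity" per layer. As an illustration only (the weights below are hand-picked by me, not learned), a 2-2-1 sigmoid network can compute XOR, which a single perceptron cannot:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def forward(x, layers):
    """One forward pass; each layer is a list of (weights, bias) neurons."""
    for layer in layers:
        x = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
             for ws, b in layer]
    return x

# Hand-picked weights (illustrative, not trained): XOR = AND(OR, NAND).
xor_net = [
    [((20.0, 20.0), -10.0),    # hidden unit ≈ OR
     ((-20.0, -20.0), 30.0)],  # hidden unit ≈ NAND
    [((20.0, 20.0), -30.0)],   # output unit ≈ AND
]
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
print([round(forward(x, xor_net)[0]) for x in inputs])  # → [0, 1, 1, 0]
```

In practice the weights are of course learned by backpropagation rather than set by hand; the forward pass above is the part backpropagation differentiates through.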
Lecture 7: Linear regression
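For one input variable, the least-squares line has a closed-form solution: the slope is the ratio of the x-y covariance to the x variance. A sketch with invented, exactly linear data (my illustration, not course code):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y ≈ a*x + b via the closed-form solution."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx  # slope, intercept

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]  # exactly y = 2x + 1
a, b = fit_line(xs, ys)
print(a, b)  # → 2.0 1.0
```

With several input variables the same idea generalises to the normal equations, w = (XᵀX)⁻¹Xᵀy, which is typically where the lecture's matrix formulation comes in.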
Lecture 8: Logistic regression
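Logistic regression has no closed-form solution, so the weights are fitted by gradient descent on the log-loss; conveniently, the gradient per example is just (p − t)·x. A toy 1-D sketch with invented 0/1-labelled data:

```python
import math

def train_logreg(X, y, lr=0.1, epochs=2000):
    """Batch gradient descent on the logistic log-loss; labels are 0/1."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        gw = [0.0] * len(w)
        gb = 0.0
        for x, t in zip(X, y):
            # Sigmoid of the linear score gives p = P(y=1 | x).
            p = 1 / (1 + math.exp(-(sum(wi * xi
                                        for wi, xi in zip(w, x)) + b)))
            for i, xi in enumerate(x):
                gw[i] += (p - t) * xi  # gradient of log-loss w.r.t. w_i
            gb += p - t
        w = [wi - lr * gi for wi, gi in zip(w, gw)]
        b -= lr * gb
    return w, b

X = [(0.0,), (1.0,), (2.0,), (3.0,)]
y = [0, 0, 1, 1]
w, b = train_logreg(X, y)

def predict(x):
    return 1 / (1 + math.exp(-(w[0] * x + b)))

print(predict(0.0) < 0.5 < predict(3.0))  # → True
```

The same gradient shape (prediction minus target, times input) reappears for the maximum entropy model in lecture 9, since logistic regression is its two-class special case.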
Lecture 9: Naive Bayes, maximum entropy model
Reading about Naive Bayes: section 2 of the lecture notes by Andrew Ng
Tutorial about log-linear modeling by Jason Eisner
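A hedged sketch of the multinomial Naive Bayes classifier from lecture 9 (the tiny sentiment corpus is invented): training just counts words per class, and classification picks the class maximising log prior plus summed log likelihoods, with add-one smoothing for unseen counts.

```python
import math
from collections import Counter

def train_nb(docs, labels):
    """Multinomial Naive Bayes with add-one (Laplace) smoothing."""
    classes = set(labels)
    vocab = set(w for d in docs for w in d)
    prior = {c: math.log(labels.count(c) / len(labels)) for c in classes}
    counts = {c: Counter(w for d, l in zip(docs, labels) if l == c
                         for w in d)
              for c in classes}
    totals = {c: sum(counts[c].values()) for c in classes}

    def loglik(c, w):
        # Smoothed estimate of P(word | class).
        return math.log((counts[c][w] + 1) / (totals[c] + len(vocab)))

    def classify(doc):
        return max(classes,
                   key=lambda c: prior[c] + sum(loglik(c, w) for w in doc
                                                if w in vocab))
    return classify

# Invented toy sentiment corpus.
docs = [["good", "great"], ["great", "fun"],
        ["bad", "awful"], ["bad", "boring"]]
labels = ["pos", "pos", "neg", "neg"]
classify = train_nb(docs, labels)
print(classify(["great", "good"]))  # → pos
```

Working in log space avoids underflow from multiplying many small probabilities; the smoothing constant 1 is the usual classroom default, not a tuned choice.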