Open Students Theses

KDnuggets Home » News » 2017 » Apr » Tutorials, Overviews » Top 20 Recent Research Papers on Machine Learning and Deep Learning (  17:n14  )

Silver Blog, 2017

Top 20 Recent Research Papers on Machine Learning and Deep Learning


Tags: Deep Learning , Machine Learning , Research , Top list , Yoshua Bengio

Machine learning and Deep Learning research advances are transforming our technology. Here are the 20 most important (most-cited) scientific papers that have been published since 2014, starting with “Dropout: a simple way to prevent neural networks from overfitting”.


Machine learning, especially its subfield of Deep Learning, has seen many remarkable advances in recent years, and important research papers may lead to breakthroughs in technology used by billions of people. Research in this field is developing very quickly, and to help our readers monitor its progress we present this list of the most important recent scientific papers published since 2014.
We selected the top 20 papers using citation counts from three academic sources. Since the citation counts vary among sources and are only estimates, we list the results from the source whose counts are slightly lower than the others'.

For each paper we also give the year it was published, together with a Highly Influential Citation count (HIC) and a Citation Velocity (CV) measure. HIC, which reflects how publications build upon and relate to each other, is the result of identifying meaningful citations. CV is the weighted average number of citations per year over the last 3 years. For some references, a CV of zero means the value was blank or not reported by the source.

Most (but not all) of these 20 papers, including the top 8, are on the topic of Deep Learning. However, we see strong diversity – only one author (Yoshua Bengio) has 2 papers, and the papers were published in many different venues: CoRR (3), ECCV (3), IEEE CVPR (3), NIPS (2), ACM Comp Surveys, ICML, IEEE PAMI, IEEE TKDE, Information Fusion, Int. J. on Computers & EE, JMLR, KDD, and Neural Networks. The top two papers have by far the highest citation counts of the list; note that the second paper was published only last year. Read (or re-read) them and learn about the latest advances.

  1. Dropout: a simple way to prevent neural networks from overfitting , by Hinton, G.E., Krizhevsky, A., Srivastava, N., Sutskever, I., & Salakhutdinov, R. (2014). Journal of Machine Learning Research, 15, 1929-1958. (cited 2084 times, HIC: 142 , CV: 536).

    Summary: The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. This significantly reduces overfitting and gives major improvements over other regularization methods.
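The idea fits in a few lines of NumPy. This is a hedged illustration of "inverted" dropout, not the authors' implementation; the function name and rescaling convention are our own:

```python
import numpy as np

def dropout(x, p=0.5, train=True, rng=None):
    """Inverted dropout: drop each unit with probability p during training,
    rescaling survivors by 1/(1-p) so the expected activation is unchanged."""
    if not train or p == 0.0:
        return x
    rng = rng if rng is not None else np.random.default_rng(0)
    mask = rng.random(x.shape) >= p      # keep each unit with probability 1 - p
    return x * mask / (1.0 - p)

activations = np.ones((4, 8))
dropped = dropout(activations, p=0.5)    # roughly half the units are zeroed
```

At test time (`train=False`) the layer is simply the identity, which is why the rescaling is done during training rather than at inference.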

  2. Deep Residual Learning for Image Recognition , by He, K., Ren, S., Sun, J., & Zhang, X. (2016). CoRR, abs/1512.03385. (cited 1436 times, HIC: 137 , CV: 582).
    Summary: We present a residual learning framework to ease the training of deep neural networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
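The reformulation itself is tiny in code. Below is a minimal NumPy sketch of one block (our own illustration; the two-layer residual function and weight shapes are assumptions, not the paper's architecture):

```python
import numpy as np

def residual_block(x, W1, W2):
    """y = F(x) + x: the stacked layers learn a residual F(x) = ReLU(x W1) W2,
    and the identity shortcut adds the input back."""
    h = np.maximum(x @ W1, 0.0)   # first layer of F, with ReLU
    return h @ W2 + x             # identity shortcut connection

# If the residual weights are zero, F vanishes and the block is exactly the
# identity mapping -- one intuition for why extra residual layers are easy
# to optimize.
```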
  3. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift , by Sergey Ioffe, Christian Szegedy (2015) ICML. (cited 946 times, HIC: 56 , CV: 0).
    Summary: Training Deep Neural Networks is complicated by the fact that the distribution of each layer’s inputs changes during training, as the parameters of the previous layers change.  We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs.  Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
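The per-batch transform is a few lines in NumPy. This sketch covers only the normalization step; the learned scale and shift (gamma, beta) are shown as scalars for brevity:

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize each feature over the mini-batch, then scale and shift."""
    mu = x.mean(axis=0)                    # per-feature batch mean
    var = x.var(axis=0)                    # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # zero mean, unit variance
    return gamma * x_hat + beta            # learned affine transform
```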
  4. Large-Scale Video Classification with Convolutional Neural Networks , by Fei-Fei, L., Karpathy, A., Leung, T., Shetty, S., Sukthankar, R., & Toderici, G. (2014). IEEE Conference on Computer Vision and Pattern Recognition (cited 865 times, HIC: 24 , CV: 239)
    Summary: Convolutional Neural Networks (CNNs) have been established as a powerful class of models for image recognition problems. Encouraged by these results, we provide an extensive empirical evaluation of CNNs on large-scale video classification using a new dataset of 1 million YouTube videos belonging to 487 classes.
  5. Microsoft COCO: Common Objects in Context , by Belongie, S.J., Dollár, P., Hays, J., Lin, T., Maire, M., Perona, P., Ramanan, D., & Zitnick, C.L. (2014). ECCV. (cited 830 times, HIC: 78 , CV: 279)
    Summary: We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. Our dataset contains photos of 91 object types that would be easily recognizable by a 4-year-old. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.
  6. Learning deep features for scene recognition using places database , by Lapedriza, À., Oliva, A., Torralba, A., Xiao, J., & Zhou, B. (2014). NIPS. (cited 644 times, HIC: 65 , CV: 0)
    Summary: We introduce a new scene-centric database called Places with over 7 million labeled pictures of scenes. We propose new methods to compare the density and diversity of image datasets and show that Places is as dense as other scene datasets and has more diversity.
  7. Generative adversarial nets , by Bengio, Y., Courville, A.C., Goodfellow, I.J., Mirza, M., Ozair, S., Pouget-Abadie, J., Warde-Farley, D., & Xu, B. (2014) NIPS. (cited 463 times, HIC: 55 , CV: 0)
    Summary: We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
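The two-player objective is easy to write down. The sketch below (our own, using the paper's log-loss form plus the commonly used non-saturating generator loss) computes both losses from a batch of discriminator outputs:

```python
import numpy as np

def gan_losses(d_real, d_fake, eps=1e-12):
    """d_real = D(x) on real data, d_fake = D(G(z)) on samples, both in (0, 1)."""
    # D maximizes log D(x) + log(1 - D(G(z))); we minimize the negation.
    d_loss = -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))
    # Non-saturating generator loss: G maximizes log D(G(z)).
    g_loss = -np.mean(np.log(d_fake + eps))
    return d_loss, g_loss
```

A maximally unsure discriminator (outputting 0.5 everywhere) sits at the game's equilibrium value of 2 log 2 for the discriminator loss.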
  8. High-Speed Tracking with Kernelized Correlation Filters , by Batista, J., Caseiro, R., Henriques, J.F., & Martins, P. (2015). CoRR, abs/1404.7584. (cited 439 times, HIC: 43 , CV: 0)
    Summary: In most modern trackers,  to cope with natural image changes, a classifier is typically trained with translated and scaled sample patches. We propose an analytic model for datasets of thousands of translated patches. By showing that the resulting data matrix is circulant, we can diagonalize it with the discrete Fourier transform, reducing both storage and computation by several orders of magnitude.
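The circulant trick is easy to verify numerically. This sketch (ours, not the authors' tracker code) builds a circulant matrix from its first column and shows that a matrix-vector product reduces to elementwise multiplication in the Fourier domain:

```python
import numpy as np

def circulant(c):
    """Circulant matrix with first column c; every column is a cyclic shift."""
    n = len(c)
    return np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

c = np.array([4.0, 1.0, 3.0, 2.0])     # e.g. responses of a base sample
C = circulant(c)
x = np.array([1.0, -2.0, 0.5, 3.0])

# The DFT of the first column gives C's eigenvalues, so C @ x becomes an
# O(n log n) elementwise product instead of an O(n^2) matrix product.
fast = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real
```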
  9. A Review on Multi-Label Learning Algorithms , by Zhang, M., & Zhou, Z. (2014). IEEE TKDE (cited 436 times, HIC: 7 , CV: 91)
    Summary: This paper aims to provide a timely review of multi-label learning, which studies the problem where each example is represented by a single instance while being associated with a set of labels simultaneously.
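For concreteness, multi-label data is usually encoded as a binary label matrix, and one basic evaluation measure covered by such reviews, the Hamming loss, is a one-liner (toy data of our own):

```python
import numpy as np

# Each row is one example; each column is one label (1 = label applies).
Y_true = np.array([[1, 0, 1, 0],
                   [0, 1, 0, 0],
                   [1, 1, 0, 1]])
Y_pred = np.array([[1, 0, 0, 0],
                   [0, 1, 0, 0],
                   [1, 1, 0, 0]])

def hamming_loss(Y, Yhat):
    """Fraction of (example, label) decisions that are wrong."""
    return float(np.mean(Y != Yhat))
```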
  10. How transferable are features in deep neural networks , by Bengio, Y., Clune, J., Lipson, H., & Yosinski, J. (2014) CoRR, abs/1411.1792. (cited 402 times, HIC: 14 , CV: 0)
    Summary: We experimentally quantify the generality versus specificity of neurons in each layer of a deep convolutional neural network and report a few surprising results. Transferability is negatively affected by two distinct issues: (1) the specialization of higher layer neurons to their original task at the expense of performance on the target task, which was expected, and (2) optimization difficulties related to splitting networks between co-adapted neurons, which was not expected.
  11. Do we need hundreds of classifiers to solve real world classification problems , by Amorim, D.G., Barro, S., Cernadas, E., & Delgado, M.F. (2014).  Journal of Machine Learning Research (cited 387 times, HIC: 3 , CV: 0)
    Summary: We evaluate 179 classifiers arising from 17 families (discriminant analysis, Bayesian, neural networks, support vector machines, decision trees, rule-based classifiers, boosting, bagging, stacking, random forests and other ensembles, generalized linear models, nearest-neighbors, partial least squares and principal component regression, logistic and multinomial regression, multiple adaptive regression splines and other methods). We use 121 data sets from the UCI database to study classifier behavior independent of the data set collection. The winners are the random forest (RF) versions implemented in R (and accessed via caret) and the SVM with Gaussian kernel implemented in C using LibSVM.
  12. Knowledge vault: a web-scale approach to probabilistic knowledge fusion , by Dong, X., Gabrilovich, E., Heitz, G., Horn, W., Lao, N., Murphy, K., … & Zhang, W. (2014, August). In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM. (cited 334 times, HIC: 7 , CV: 107).
    Summary: We introduce Knowledge Vault, a Web-scale probabilistic knowledge base that combines extractions from Web content (obtained via analysis of text, tabular data, page structure, and human annotations) with prior knowledge derived from existing knowledge repositories for constructing knowledge bases. We employ supervised machine learning methods for fusing  distinct information sources. The Knowledge Vault is substantially bigger than any previously published structured knowledge repository, and features a probabilistic inference system that computes calibrated probabilities of fact correctness.
  13. Scalable Nearest Neighbor Algorithms for High Dimensional Data , by Lowe, D.G., & Muja, M. (2014). IEEE Trans. Pattern Anal. Mach. Intell., (cited 324 times, HIC: 11 , CV: 69).
    Summary: We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms.  In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper.
  14. Trends in extreme learning machines: a review , by Huang, G., Huang, G., Song, S., & You, K. (2015).  Neural Networks,  (cited 323 times, HIC: 0 , CV: 0)
    Summary: We aim to report the current state of the theoretical research and practical advances on the Extreme Learning Machine (ELM). Apart from classification and regression, ELM has recently been extended for clustering, feature selection, representational learning and many other learning tasks. Due to its remarkable efficiency, simplicity, and impressive generalization performance, ELM has been applied in a variety of domains, such as biomedical engineering, computer vision, system identification, and control and robotics.
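The source of ELM's efficiency is that the hidden layer is random and never trained; only the output weights are solved in closed form. A minimal NumPy regression sketch (ours; the layer size and tanh activation are illustrative choices):

```python
import numpy as np

def elm_fit(X, Y, hidden=100, rng=None):
    """Random hidden layer + least-squares output weights (no backprop)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    W = rng.normal(size=(X.shape[1], hidden))
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)              # random nonlinear features
    beta = np.linalg.pinv(H) @ Y        # closed-form least squares
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```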
  15. A survey on concept drift adaptation , by Bifet, A., Bouchachia, A., Gama, J., Pechenizkiy, M., & Zliobaite, I.  ACM Comput. Surv., 2014 , (cited 314 times, HIC: 4 , CV: 23)
    Summary: This work aims at providing a comprehensive introduction to concept drift adaptation, which refers to an online supervised learning scenario in which the relation between the input data and the target variable changes over time.
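To give a minimal flavor of the setting (a toy sketch of our own, not an algorithm from the survey): one crude drift signal is to compare a model's error rate on an old window of the stream against a recent one.

```python
def drift_detected(errors, window=50, threshold=0.2):
    """Flag drift when the recent error rate exceeds the preceding
    window's error rate by more than `threshold`.

    errors: stream of 0/1 prediction mistakes, oldest first."""
    if len(errors) < 2 * window:
        return False                     # not enough evidence yet
    old = sum(errors[-2 * window:-window]) / window
    recent = sum(errors[-window:]) / window
    return recent - old > threshold
```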
  16. Multi-scale Orderless Pooling of Deep Convolutional Activation Features , by Gong, Y., Guo, R., Lazebnik, S., & Wang, L. (2014). ECCV (cited 293 times, HIC: 23 , CV: 95)
    Summary:  To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective scheme called multi-scale orderless pooling (MOP-CNN).
  17. Simultaneous Detection and Segmentation , by Arbeláez, P.A., Girshick, R.B., Hariharan, B., & Malik, J. (2014) ECCV , (cited 286 times, HIC: 23 , CV: 94)
    Summary: We aim to detect all instances of a category in an image and, for each instance, mark the pixels that belong to it. We call this task Simultaneous Detection and Segmentation (SDS).
  18. A survey on feature selection methods , by Chandrashekar, G., & Sahin, F.  Int. J. on Computers & Electrical Engineering, (cited 279 times, HIC: 1 , CV: 58)
    Summary: Plenty of feature selection methods are available in the literature, driven by the availability of data with hundreds of variables and hence very high dimensionality.
  19. One Millisecond Face Alignment with an Ensemble of Regression Trees , by Kazemi, Vahid, and Josephine Sullivan, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2014, (cited 277 times, HIC: 15 , CV: 0)
    Summary: This paper addresses the problem of Face Alignment for a single image. We show how an ensemble of regression trees can be used to estimate the face’s landmark positions directly from a sparse subset of pixel intensities, achieving super-realtime performance with high quality predictions.
  20. A survey of multiple classifier systems as hybrid systems , by Corchado, E., Graña, M., & Wozniak, M. (2014). Information Fusion, 16, 3-17. (cited 269 times, HIC: 1 , CV: 22)
    Summary: A current focus of intense research in pattern classification is the combination of several classifier systems, which can be built following either the same or different models and/or training datasets.




Machine learning and data science project topics

Topics for the course CS-E4870 Research Project in Machine Learning and Data Science.

Topic #1: Understanding information propagation on WhatsApp

Background: WhatsApp is one of the biggest peer-to-peer messaging platforms in the world, with over 800 million users worldwide. The project aims to understand how information spreads on WhatsApp. We collected around a million messages exchanged on WhatsApp using data from public groups. From this data, we plan to answer questions such as: (i) understanding the dynamics of group formation on WhatsApp, (ii) what information are people sharing, and when? (iii) what content becomes viral, and why? Though there have been studies on understanding information propagation on social networks, WhatsApp, with its peer-to-peer model, is a novel way of communication and can provide novel insights into these processes. The tasks involved in the project are mostly open-ended and exploratory, and can be discussed and modified according to the strengths of the student. If you have any questions regarding the project, please contact Kiran.

Prerequisite: The topic needs basic knowledge of dealing with databases (in SQL), programming in Python, and ability/interest in dealing with gigabytes of data. Disclaimer: The data might contain adult content.

Instructor (name and email): Kiran Garimella ([email protected] )

Topic presented by: instructor

Topic available: yes

Topic #2: Reputation system for Bluetooth Beacons

Background: The Bluetooth Beacon is a recent technological trend that is getting widely adopted for purposes such as contextual information sharing, similar to RFID tags or QR codes, with the difference that information is shared without user interaction (no explicit scanning). This technology is increasingly used for providing contextual information in public places (e.g. shopping malls, museums, banks, art galleries, etc.) by broadcasting URLs containing context-specific information. Whenever a user passes near a deployed device, she sees a pop-up notification on the screen of her smartphone showing a list of URLs broadcast by nearby beacons. In this context of passive reception of contextual information, it is easy for an attacker to deploy rogue beacons broadcasting malicious links impersonating legitimate ones (e.g. phishing links).
We developed and deployed an Android application on several smartphones, collecting and exporting historical data about beacons observed at a given location and time, together with the associated information (URL).
This project aims to analyse this data and infer relevant features depicting, for instance, the stability of a beacon and the information it broadcasts over time, in order to infer a reputation score. Once features are identified, a classifier can be built to automatically assign a reputation score to a beacon. This can help inform users about the legitimacy of a received link and steer them towards visiting safe websites.
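As a purely hypothetical starting point (the function name and scoring rule below are our assumptions, not part of the collected data or a finished design), URL stability alone already yields a crude reputation feature:

```python
from collections import Counter

def reputation_score(observed_urls):
    """Fraction of a beacon's observations that agree with its most
    frequently broadcast URL; 1.0 means a perfectly stable beacon."""
    if not observed_urls:
        return 0.0
    counts = Counter(observed_urls)
    return counts.most_common(1)[0][1] / len(observed_urls)
```

A real system would combine several such features (temporal stability, location stability, domain reputation) and feed them to a trained classifier.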

Prerequisite: data mining, reputation system knowledge, Python programming, dealing with MongoDB (optional)

Instructor (name and email): Samuel Marchal ([email protected] )

Topic presented by: instructor

Topic available: yes

Topic #3: Extreme multi-label classification in networked data with textual content

Background: Extreme multi-label (XML) classification is a type of supervised machine learning problem that has gained popularity recently. In this setting, each data instance has one or more labels. The main challenge for XML learning is the huge label space, which can contain thousands or even millions of labels.

In this project, we focus on XML learning in networked data with textual information. Such data is abundant in the real world. For example, in Q&A sites each question carries one or more labels/tags, while underlying network structures inter-link questions, answers, and users. Another similar example is GitHub repositories, which are tagged and linked with other entities.

We will explore different directions to tackle this task: 1) network representation learning (NRL) has been empirically demonstrated to improve supervised learning performance. Can we leverage the power of NRL as well? 2) deep learning for text modeling has gained great success. Can we apply deep learning techniques such as CNNs/RNNs to our goal? 3) XML learning usually requires a different training loss function compared to traditional binary/multi-class classification problems. Can we find a better loss function for our tasks?

Prerequisite: supervised machine learning, deep learning (word2vec, CNN), machine learning programming (for example, Python, Tensorflow, but not limited)

Instructor (name and email): Han Xiao ([email protected] )

Topic presented by: instructor

Topic available: yes

Topic #4: Comparison of generative models for MLaaS model stealing attacks

Background: Machine Learning as a Service (MLaaS) is a new service paradigm, which outsources machine learning models to cloud service providers. Subsequent predictions on the server may require the client to pay for the service. The cloud-deployed models are however subject to so-called model stealing attacks [1, 2], where a malicious user tries to obtain a perfect replica (in case it is a linear model) or a near-replica of the MLaaS model. If the adversary has detailed knowledge of probable training/testing data samples, he can attack such MLaaS systems faster [2]. Such knowledge lets the adversary understand the approximate distributions of the data (manifolds), and this allows him to hot-start the search for the decision boundaries that characterize the model. These can be used to generate query points to send to the server, to improve upon the attacker’s version of the MLaaS-deployed model.

In this project, the student is expected to compare the effectiveness of generative models on a few vision datasets (MNIST, SVHN, and/or GTSRB) under different levels of prior knowledge of correct (data, label) pairs. Generative models of interest include the baseline probabilistic PCA formulation [3], and a few different architectures of autoencoders [4] and generative adversarial networks (DCGAN) [5]. A scientific report is expected, comparing the models along the axes of runtime, model-extraction efficiency, and the number of examples known in advance.

[1] Tramèr, Florian, et al. "Stealing Machine Learning Models via Prediction APIs." USENIX Security Symposium. 2016.

[2] Papernot, Nicolas, et al. "Practical black-box attacks against machine learning." Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security. ACM, 2017.

[3] Tipping, M. and Bishop, C. "Probabilistic Principal Component Analysis". link:

[4] Bengio, Yoshua, et al. "Generalized denoising auto-encoders as generative models." Advances in Neural Information Processing Systems. 2013.

[5] Radford, Alec, Luke Metz, and Soumith Chintala. "Unsupervised representation learning with deep convolutional generative adversarial networks." arXiv preprint arXiv:1511.06434 (2015).

Prerequisite: Machine learning and programming skills. At least basic understanding of neural networks, optimization and papers.

Instructor (name and email):  Mika Juuti ( [email protected] )

Topic presented by: instructor

Topic available: yes

Topic #5: Characterizing richness of summary for word-vector representations

Background: Advances in natural language parsing have enabled fast approximate sentence parsing with neural networks [1]. Recently, a structured model for text extraction using a structure called AETR (Agent-Event-Theme-Recipient) has been introduced, which structures sentence clauses into four bag-of-words representations, one per role. Such a representation incorporates more of the semantic meaning of sentences than traditional n-gram bag-of-words representations do, and it can be characterized with a context vector [2]. It is however unclear how well this representation is able to preserve the information content of a source sentence.
The purpose of this assignment is to characterize how well a specific type of context vector representation (AETR) encompasses the information in some different types of sentences. The richness of such a representation can be characterized by trying to train a recurrent neural network (LSTM) to reconstruct a given sentence from a context representation. The comparison will be done in parallel with a standard neural machine translation network. The student is advised to start from ready-made online code [4] for seq2seq translation, which contains a tutorial for translating English to French in PyTorch. The student needs to make some minor modifications to the code to achieve this task. In addition, the student is expected to compare a few different variations of AETR, and some standard context representations such as seq2seq-model-learnt context vectors [4] and word2vec representations. The goodness of the representation can be evaluated with automated language translation metrics [5]. Training a neural network takes time, and the student is advised to begin early.

 [1] Honnibal, M. "spaCy (Version 0.100. 6)[Computer software]." (2016).

[2] Jurafsky, D. and Martin J. "Speech and Language Processing (3rd ed. draft)". Chapters 15 and 16. link: (accessed 5.9.2017)

[3] Olah, C. "Understanding LSTM networks". link: (accessed 5.9.2017)

[4] Robertson, S. Translation with a Sequence to Sequence Network and Attention". link: (accessed 5.9.2017)

[5] Papineni, K.; Roukos, S.; Ward, T.; Zhu, W. J. (2002). BLEU: a method for automatic evaluation of machine translation (PDF). ACL-2002: 40th Annual meeting of the Association for Computational Linguistics.

Prerequisite: Good programming skills (preferably Python). Machine learning and neural network skills.

Instructor (name and email): Mika Juuti ( [email protected] ) and Tommi Gröndahl ( [email protected] )

Topic presented by: instructor

Topic available: yes

Topic #6: Deep Reinforcement Learning for Big Data Networks

Background:  Explore the different RL algorithms for solving the problem of joint sampling and recovery for graph signals representing massive network-structured datasets. 

Prerequisite: Good programming skills. Machine learning and neural network skills.

Instructor (name and email):  [email protected]

Topic presented by: instructor

Topic available: yes

Topic #7: Information Flow over Complex Graphical Models

Background:   Exploring the inferential properties of large complex graphical models, underlying e.g.,  deep neural networks, via tools from Shannon information and the theory of complex networks.

Prerequisite: Good programming skills. Machine learning and neural network skills.

Instructor (name and email):  [email protected]

Topic presented by: instructor

Topic available: yes

Topic #8: Various Bayesian data analysis and probabilistic programming topics

Background: Various data analysis projects related to Bayesian data analysis, especially data analysis workflow, Bayesian model assessment, comparison and selection, priors, approximate inference methods, and structural and sparsifying priors for probabilistic programming in general, and especially for Stan. In addition, various projects related to Gaussian processes for survival models and Bayesian optimization with GPy.

Prerequisite: Bayesian data analysis. Good programming skills in R or Python.

Instructor (name and email): [email protected]

Topic presented by: instructor

Topic available: yes

Topic #9:  Gaze prediction in egocentric vision

Background: Predicting where a person may look in videos recorded from his/her perspective is an interesting topic for many applications. You will be implementing an algorithm for this task using deep learning frameworks. See Fathi et al., Learning to Predict Gaze in Egocentric Video, ICCV 2013.

Prerequisite:  Programming skills, interest in computer vision, deep learning, and tensorflow.

Instructor (name and email): Hamed R. Tavakoli ([email protected] )

Topic presented by: instructor 

Topic available: yes 

Topic #10:  Image captioning

Background: Describing the content of an image using human language is a topic that has gained great interest in recent years. You will be implementing a method for this task in line with our previous research, Shetty et al., Exploiting scene context for image captioning.

Prerequisite:  Programming skills, interest in computer vision, natural language processing, deep learning and tensorflow.

Instructor (name and email): Hamed R. Tavakoli ([email protected] )

Topic presented by: instructor 

Topic available: yes 

Topic #11:  Comparing features from natural image statistics

Background: In this project, you will be comparing various techniques for learning features from image statistics. Various properties, such as robustness to noise, translation, and rotation, and the match to human eye statistics, will be studied. Please see Hyvärinen et al., Natural Image Statistics, Springer, 2009, for further details.

Prerequisite:  Programming skills, interest in computer vision and machine learning.

Instructor (name and email): Hamed R. Tavakoli ([email protected] )

Topic presented by: instructor 

Topic available: yes 

Topic #12: Beauty in roots of polynomials over finite fields?  

Background: As G. H. Hardy put it, "beauty is the first test: there is no permanent place in the world for ugly mathematics." For example, it can be argued that the roots of certain families of polynomials, when visualized on the complex plane, are just plain beautiful. But what if we are not working over the complex numbers but rather over a finite field, and with multivariate polynomials? How does one visualize the roots over a finite field to unlock beauty? Our interest is to explore the roots of structured families of low-degree polynomials (e.g. homogeneous polynomials with 0,1-coefficients). 

In case you are wondering why low-degree structured polynomials over finite fields are of interest to our algorithms community at Aalto, see, for example,  here .  
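As a concrete starting point for the computational exploration (pure Python; the prime and the example polynomial below are just illustrative choices), the root set of a bivariate polynomial over GF(p) can be enumerated directly and then rendered as a p × p grid:

```python
p = 13  # a small prime field GF(13)

def roots_gf(f, p):
    """All (x, y) in GF(p)^2 with f(x, y) == 0 (mod p)."""
    return [(x, y) for x in range(p) for y in range(p) if f(x, y) % p == 0]

# The 'unit circle' x^2 + y^2 - 1 over GF(13): for a prime p = 1 (mod 4)
# it has exactly p - 1 points, scattered in a surprisingly structured pattern.
circle = roots_gf(lambda x, y: x * x + y * y - 1, p)
```

Plotting such root sets for structured families of polynomials, and for varying p, is one way to start hunting for the "beauty" the topic asks about.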

Prerequisite:  This project requires a pioneering spirit in discrete mathematics with skills and insight into computational exploration and visualization. 

Instructor (name and email): Petteri Kaski ([email protected] )

Topic presented by:  instructor

Topic available: yes

Topic #13: Machine Learning in Genomics

Background: Identifying the genes responsible for a disease helps researchers develop better treatments for it. Mathematical biologist Eric Lander described the human genome with the words: "Genome: Bought the book; hard to read." Advances in genomic technologies over recent decades have resulted in a flood of genomics data, making machine learning methods such as deep learning applicable to the problem. In this project, you will do both theoretical and experimental work on these methods in the problem context. The project can be based on a subproblem (multiple alternatives) from an ongoing research project.

Prerequisite: Excellent skills in machine learning, programming (Python and R are preferred) and mathematics. 

Instructor (name and email): Jaakko Reinvall ( [email protected] )

Topic presented by: instructor

Topic available: yes 

Topic #14: Replicating Inverse Reinforcement Learning Studies  


Motivation: Inverse reinforcement learning (IRL) is mainly concerned with inferring the reward function of an agent, based on observations of its behavior in a certain environment. This can be contrasted with reinforcement learning (RL), which is concerned with finding optimal behavior policies, given defined rewards and an environment. IRL has many applications in fields such as robotics and human-computer interaction. For example, if we could better understand the reward function that humans use for moving around in rough terrain, we could help bipedal robots achieve the same level of performance. Or, if we could better understand why humans interact in particular ways with a computer system, we could use that information for designing better user interfaces.

Objective: The objective of this project is to replicate the results from one (or more) key papers in the field of IRL (see references for examples). The paper can be chosen based on the skills and interests of the student. The project should also produce a neat, documented and sufficiently high-level implementation of the chosen IRL algorithm, preferably implemented in Python, that can be used for solving other well-defined IRL problems as well. 

Methodology: The student will first read relevant scientific papers to gain a sufficient understanding of the topic and the related mathematics. After this, the student will propose how they plan to implement the replication study. After the implementation, the student will write a short report that explains the implementation and the results of the replication study, accompanied by the full implementation source code (e.g., accessible on GitHub).
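Whichever paper is chosen, most IRL algorithms repeatedly solve a forward RL problem as an inner loop for each candidate reward. As a warm-up for the replication, a minimal value-iteration solver for a made-up 5-state chain MDP (illustrative only, not taken from any of the referenced papers) could look like this:

```python
# Minimal value iteration for a toy 5-state chain MDP: actions move the
# agent deterministically left or right; reward 1.0 is collected on
# arriving at the last state (which then self-loops). Any IRL replication
# needs such a forward solver as its inner loop.

N_STATES = 5
ACTIONS = (-1, +1)          # left, right
GAMMA = 0.9

def step(s, a):
    return min(max(s + a, 0), N_STATES - 1)

def reward(s):
    return 1.0 if s == N_STATES - 1 else 0.0

def value_iteration(tol=1e-8):
    V = [0.0] * N_STATES
    while True:
        V_new = [max(reward(step(s, a)) + GAMMA * V[step(s, a)]
                     for a in ACTIONS) for s in range(N_STATES)]
        if max(abs(x - y) for x, y in zip(V, V_new)) < tol:
            return V_new
        V = V_new

V = value_iteration()
# Greedy policy w.r.t. the converged values: +1 means "go right"
policy = [max(ACTIONS, key=lambda a: reward(step(s, a)) + GAMMA * V[step(s, a)])
          for s in range(N_STATES)]
```

For this chain the optimal policy is to go right in every state, and the values increase monotonically toward the rewarding end of the chain.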

Prerequisite: Capability to work with advanced mathematics. Good programming skills (preferably Python). Understanding of the basics of RL. As this project may be quite challenging, students can also apply in pairs. The student must have a decent background in machine learning and be able to work independently. PhD students preferred.

Instructor (name and email):  Kangasraasio Antti ([email protected] ) 

Topic presented by: instructor

Topic available: yes

Topic #15: Application of structured feature learning in bioinformatics

Background: This project studies a structured learning problem in which the task is to predict characteristic properties of molecules from corresponding measurements from chemical analysis, e.g., mass spectra. To do so, features first have to be extracted from the mass spectra. For that, we apply unsupervised learning to extract latent substructures from the spectra. As the feature extraction method, Latent Dirichlet Allocation (LDA) is applied, which is frequently used for topic extraction in text processing. Recently, LDA has been successfully applied to extract features from mass spectra [1, 2]. Using the extracted features, we use structured kernel regression to predict the molecules’ characteristic properties [3]. Implementations for the feature extraction and structured prediction are available.

[1] Topic modeling for untargeted substructure exploration in metabolomics, Justin Johan Jozias van der Hooft (2016)
[2] (Paper, Feature- and LDA-code in R/Python available)
[3] Fast metabolite identification with Input Output Kernel Regression, Brouard C. (2016)
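The structured prediction step of [3] is a kernel method; as a heavily simplified stand-in, plain kernel ridge regression on toy 1-D data illustrates the basic machinery. The data points and hyperparameters below are invented for illustration:

```python
# Plain kernel ridge regression, a simplified stand-in for the structured
# (IOKR-style) case: f(x) = sum_i alpha_i k(x, x_i), where
# alpha = (K + lambda I)^{-1} y, solved here by Gaussian elimination.
import math

def rbf(x, z, ls=1.0):
    return math.exp(-(x - z) ** 2 / (2 * ls ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    x = [0.0] * n
    for r in reversed(range(n)):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

X = [0.0, 1.0, 2.0]         # toy "feature" inputs
y = [0.0, 1.0, 0.0]         # toy target property
lam = 1e-3                  # ridge regularisation
K = [[rbf(a, b) for b in X] for a in X]
alpha = solve([[K[i][j] + (lam if i == j else 0.0) for j in range(3)]
               for i in range(3)], y)

def predict(x):
    return sum(a * rbf(x, xi) for a, xi in zip(alpha, X))
```

With a small ridge term the fit nearly interpolates the three training points.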

Prerequisite: Basic knowledge of machine learning (e.g., Bayesian networks, kernel methods); programming skills in Python, R, and MATLAB.

Instructor (name and email): Eric Bach ([email protected] )

Topic presented by: instructor

Topic available: yes

Topic #16: Non-stationary Gaussian processes with TensorFlow for million+ datapoints

Background: Gaussian processes (GPs) are one of the few model families that can challenge deep neural networks. GPs have been demonstrated on datasets of up to a billion data points while retaining statistical interpretability. We have recently developed a small-scale non-stationary GP model that can find adaptive (non-stationary) dynamics in signals for a dramatic improvement in interpretability [Heinonen et al.: Non-stationary Gaussian processes with Hamiltonian Monte Carlo, 2016].
Objective: In this project, a large-scale TensorFlow implementation of the non-stationary GP is developed and applied to a million or more data points (e.g., climate data). The main task is to extend an existing TensorFlow GP implementation with non-stationarity.
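To make the notion of non-stationarity concrete, here is a small sketch contrasting a stationary RBF kernel with a Gibbs-type kernel whose lengthscale varies with the input, one common way to introduce non-stationarity. The lengthscale function `ell` is a made-up toy, not the model of Heinonen et al.:

```python
# Stationary RBF vs. a non-stationary (Gibbs-type) kernel with an
# input-dependent lengthscale l(x). The stationary kernel depends only on
# the distance |x - x'|; the non-stationary one also depends on location.
import math

def k_rbf(x, z, ls=1.0):
    return math.exp(-(x - z) ** 2 / (2 * ls ** 2))

def ell(x):
    # toy lengthscale: short on the left, long on the right (an assumption)
    return 0.2 + 0.3 * (x + 2.0)

def k_gibbs(x, z):
    lx, lz = ell(x), ell(z)
    pre = math.sqrt(2 * lx * lz / (lx ** 2 + lz ** 2))
    return pre * math.exp(-(x - z) ** 2 / (lx ** 2 + lz ** 2))

# Same input distance, different locations:
a = k_gibbs(-1.0, -0.5)   # short-lengthscale region -> lower correlation
b = k_gibbs(1.0, 1.5)     # long-lengthscale region  -> higher correlation
```

The stationary kernel gives identical values for equally distant pairs, while the Gibbs kernel lets the correlation structure adapt across the input space.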
Prerequisite: Knowledge of Python and of variational inference is recommended. Some knowledge of Gaussian processes is useful.

Instructor (name and email): Markus Heinonen (markus.o.heinonen )

Topic presented by: instructor

Topic available: yes

Topic #17: Data visualisation with Spectral GPLVM in TensorFlow

Background: GPLVM is a Gaussian process model that maps high-dimensional complex data into, e.g., 2D projections for visualisation. GPLVM has, for instance, been applied to lay out images of faces in 2D such that facial similarities are retained in both spaces. It is a non-linear, Bayesian variant of PCA. Spectral models, on the other hand, have very recently been developed to uncover the frequencies present in data. Previously these models have been applied to find spectral features of data that are predictive of a target variable.
Objective: In this project, spectral features are instead explored for data visualisation and dimensionality reduction (unsupervised learning) by combining them with the GPLVM model. The main task is to extend an existing TensorFlow GPLVM implementation with a spectral kernel, explore it on various datasets for data visualisation, and compare it to existing approaches.
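Since GPLVM is described above as a non-linear Bayesian variant of PCA, the linear baseline is easy to sketch. The toy 2-D data below are invented for illustration:

```python
# PCA, the linear baseline that GPLVM generalises: find the leading
# principal component of tiny 2-D data by power iteration on the 2x2
# covariance matrix (pure Python, no libraries).
import math

data = [(0.0, 0.0), (1.0, 1.1), (2.0, 1.9), (3.0, 3.2)]
n = len(data)
mx = sum(p[0] for p in data) / n
my = sum(p[1] for p in data) / n
cent = [(x - mx, y - my) for x, y in data]

# 2x2 covariance matrix entries
cxx = sum(x * x for x, _ in cent) / n
cyy = sum(y * y for _, y in cent) / n
cxy = sum(x * y for x, y in cent) / n

v = (1.0, 0.0)
for _ in range(100):                      # power iteration
    w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
    norm = math.hypot(*w)
    v = (w[0] / norm, w[1] / norm)

scores = [x * v[0] + y * v[1] for x, y in cent]   # 1-D projection
```

The data lie close to the diagonal, so the leading component comes out near (0.71, 0.71) and the 1-D scores preserve the ordering along that direction.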
Prerequisite: Knowledge of Python, and Gaussian processes or variational inference are recommended.

Instructor (name and email): Markus Heinonen (markus.o.heinonen )

Topic presented by: instructor

Topic available: yes

Topic #18: Constructing 2nd generation convolutional neural network architectures

Background: The basic architectural design of convolutional neural networks (CNNs, convnets) for image analysis has persisted in a form containing several layers of convolutions with occasional pooling and subsampling. In such a pipeline, the receptive fields slowly grow from layer to layer, eventually encompassing the entire input. Representative works following this recipe include LeNet, AlexNet, and many others such as VGG, GoogLeNet, and ResNets; here these are called the 1st generation (1G) of CNN architectures. In this project, the student concentrates on so-called 2nd generation (2G) CNNs, meaning networks that contain multiple parallel paths, are perhaps densely connected, and take multiple scales better into consideration. The student will write a literature review of recent advances in the design of novel CNN architectures and experimentally evaluate and compare a 2G CNN of his/her choice against some of the 1G CNNs. The evaluation is to be done, at minimum, on either MNIST digit classification or CIFAR object recognition. There is also the possibility to pursue new ideas and improvements once a sufficient understanding of the literature is achieved.
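The receptive-field growth mentioned above follows a simple recurrence: each layer adds (kernel size − 1) times the product of all earlier strides. A small sketch (the layer specifications are illustrative, not from any particular paper):

```python
# Receptive field of a stack of conv/pool layers, given as
# (kernel_size, stride) pairs: r_l = r_{l-1} + (k_l - 1) * jump, where
# jump is the product of the strides of all earlier layers.

def receptive_field(layers):
    r, jump = 1, 1            # current RF size and effective stride
    for k, s in layers:
        r += (k - 1) * jump
        jump *= s
    return r

# Three 3x3 stride-1 convs see as far as one 7x7 conv:
assert receptive_field([(3, 1)] * 3) == 7

# A stride-2 pooling layer makes later layers grow twice as fast:
vgg_like = [(3, 1), (3, 1), (2, 2), (3, 1), (3, 1)]
rf = receptive_field(vgg_like)
```

This is the mechanism by which deep 1G stacks eventually encompass the entire input, and it also explains why parallel paths with different kernel sizes (as in 2G designs) see the input at multiple scales at once.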

Prerequisite: Programming skills (Python, C++) and basics of Machine Learning. 

Instructor (name and email): Juha Ylioinas ([email protected] )

Topic presented by: instructor

Topic available: yes


Topic #19: Visual analysis of heraldic images, such as coats of arms, and/or textual analysis of their natural language descriptions, a.k.a. blazons

Background: A coat of arms is a heraldic visual design on an escutcheon (i.e., shield), surcoat, or tabard. The coat of arms on an escutcheon forms the central element of the full heraldic achievement, which as a whole consists of shield, supporters, crest, and motto. A coat of arms is traditionally unique to an individual person, family, state, organisation, or corporation. In heraldry and heraldic vexillology, a blazon is a formal description of a coat of arms, flag, or similar emblem, from which the reader can reconstruct the appropriate image. In this project the student will implement some simple approaches for the visual analysis of coat of arms images and/or their textual descriptions. Depending on the student’s interests, the focus can be on either one of the approaches, visual or textual.

Prerequisite: Knowledge of image analysis and/or natural language (English and/or Finnish) processing.

Instructor (name and email): Jorma Laaksonen ([email protected] )

Topic presented by: instructor

Topic available: yes 

Topic #20: Distributed probabilistic matrix factorisation (large-scale recommendation systems)

Background: Matrix factorisation (MF) has been shown to be a powerful approach to practical applications of collaborative filtering for recommender systems, as demonstrated during the Netflix competition: Matrix Factorization for recommender systems. Probabilistic MF (PMF) has been shown to yield stronger results than non-probabilistic MF; furthermore, Bayesian PMF (BPMF) is able to automate the choice of suitable regularisation to avoid overfitting, a problem with large-scale matrix factorisation, and yields even better results than previous methods. The downside of BPMF is its computational cost. To address this, current research is searching for methods to distribute the computational load of BPMF inference and to improve the inference methods, which can use variational inference or MCMC sampling. Research in this area focuses on improving the efficiency of inferring large-scale BPMF or similar approaches.
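As background for the probabilistic and Bayesian variants, the plain MF objective can be sketched with stochastic gradient descent on a toy ratings matrix. All numbers below are invented, and the fixed L2 penalty `reg` is exactly the regularisation choice that BPMF automates:

```python
# Minimal (non-probabilistic) matrix factorisation by SGD on observed
# entries: approximate R ~= U V^T with L2 regularisation on the factors.
import random

random.seed(0)
# observed (user, item) -> rating entries of a 3x3 toy matrix
R = {(0, 0): 5.0, (0, 1): 3.0, (1, 0): 4.0, (1, 2): 2.0,
     (2, 1): 1.0, (2, 2): 5.0}
n_users, n_items, k = 3, 3, 2
lr, reg = 0.02, 0.02

U = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
V = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]

def sse():
    return sum((r - sum(U[u][f] * V[i][f] for f in range(k))) ** 2
               for (u, i), r in R.items())

before = sse()
for _ in range(1000):
    for (u, i), r in R.items():
        err = r - sum(U[u][f] * V[i][f] for f in range(k))
        for f in range(k):
            uu, vv = U[u][f], V[i][f]
            U[u][f] += lr * (err * vv - reg * uu)
            V[i][f] += lr * (err * uu - reg * vv)
after = sse()
```

PMF gives this objective a probabilistic interpretation (Gaussian noise and Gaussian priors on the factors), and BPMF additionally places priors on the hyperparameters so that `reg` no longer has to be hand-tuned.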

Prerequisite: Understanding of machine learning fundamentals; statistics and probability (basics of Bayesian inference); linear algebra; and programming skills (MATLAB or Python preferred) if interested in applying an existing method.

Instructor (name and email): Jonathan Strahl ([email protected] )

Topic presented by: instructor

Topic available: yes 

Topic #21:  Understanding the evolution of gender diversity from LinkedIn data  


In this project, we gather data from the professional social network LinkedIn in order to understand gender trends in employment across various dimensions: (i) education level, (ii) sector, (iii) location, etc. For example, are there cities where women are more likely to be found in senior positions on LinkedIn? How much does this depend on the sector under consideration?

The scope of the project could be broadened depending on the type and quantity of data collected.

The project contains two parts.
(i) Data collection — the first part of the project is to collect data from the LinkedIn advertising platform. This is done using browser automation tools like Selenium.
(ii) Analysis of the data and documenting the results.

The project offers the opportunity to learn about interdisciplinary data science research, in particular in the domain of Computational Social Science.

Prerequisite:    The project requires strong programming skills, preferably in Python. Experience with web scraping (for instance, using Selenium) would help.

Instructor (name and email): Kiran Garimella ([email protected]), Ingmar Weber ([email protected]), Emilio Zagheni (UW)

Topic presented by: instructor

Topic available: yes 

Topic #22: Learning automata from neural networks and using them for verification

Background: The general goal is to verify a neural network. The approach is to: (1) learn an automaton model of the neural network; (2) verify the automaton; (3) draw conclusions about the original neural network; (4) possibly iterate until convergence. We will use off-the-shelf learning and verification tools and blend them into a useful recipe.

Prerequisite: None

Instructor (name and email): Stavros Tripakis ([email protected])

Topic presented by: instructor

Topic available: yes 

Topic #23: Visual odometry and SLAM

Background: Visual odometry is the process of determining the position and orientation of a moving camera by analysing the associated images. By means of visual odometry it is possible to track the motion of a moving robot in real time. Beyond tracking, simultaneous localization and mapping (SLAM) involves the problem of map building (i.e., learning) and the utilisation of the built maps for relocalisation. Thanks to advances in both the software and hardware of mobile devices, it is nowadays possible to do visual odometry and SLAM on standard smartphones and laptops. Furthermore, some of the most recent systems are available as open-source implementations. Traditionally the approaches have utilised hand-crafted image features, but some recently proposed approaches are based on learnt image representations. In this B.Sc. thesis project, the student will familiarise himself/herself with the previous literature and current state-of-the-art approaches, and try them in practice too. The thesis can focus on analysing the limitations and possibilities of current systems, and also look at possible improvements to existing systems.

Prerequisite: Programming skills.

Instructor (name and email): Arno Solin ([email protected])

Topic presented by: instructor

Topic available: yes

Topic #24: Learning Markov Networks from Data


Markov networks (also known as Markov random fields) are graphical models used in statistical and probabilistic inference. In Markov network learning, the goal is to find an optimal network structure that best reflects the dependencies in a given dataset. The learning problem is computationally very challenging, and it has been approached by a number of techniques such as local search, dynamic programming, constraint satisfaction, etc. The constraint-based approaches have been based on, e.g., answer-set programming (ASP), maximum Boolean satisfiability (MaxSAT), and linear programming (LP). What is common to these techniques is that they enable maximization of the network score (based on log-likelihood) over the candidate structures.

The goal of this project is to become acquainted with the variety of techniques that have been proposed to solve the Markov network learning problem. It is also possible to include an experimental part in the project, where the different methods are tried out in practice on benchmark problems. Some tool development is also feasible. The exact content is to be agreed with the instructor.

Prerequisite: Students need a basic understanding of Bayesian networks and programming skills.

Instructor (name and email): Tomi Janhunen [email protected]

Topic presented by: instructor

Topic available: yes

Topic #25: Automated design and molecular simulation of 3D RNA nanostructures

Background: Over the past few decades, nucleic acids have been successfully employed as building blocks for assembling a variety of nanoscale objects with dimensions around a hundredth of a millionth of a meter. 

Along this line, we (Benson et al., Nature 2015) previously demonstrated DNA molecular renderings of complex 3D polyhedral meshes, including a benchmark computer-graphics bunny model. With the aim of large-scale production in living systems, interest is growing in the isothermal folding of RNA molecules, which are more suitable in this respect than thermally annealed DNA strands. Analogously to our previous work with DNA nanostructures, we are presently building software tools for the design of RNA-based nanoscale polyhedra. In the proposed project the student participates in the development of these tools by complementing our current RNA sequence generator with a generator of initial 3D configurations of the RNA nanostructures for molecular simulation purposes.

Prerequisite: A programming background and some mathematical maturity would be good. Interest in biochemistry and nanotechnology would be a plus, but is not compulsory.

Instructor (name and email): AbdulMelik Mohammed [email protected]

Topic presented by: instructor

Topic available: yes

Topic #26: Studying the Integration of Migrants in Germany Based on Facebook Interests


Given the recent influx of migrants into Germany, we want to use Facebook data to study which sub-groups are more or less integrated. As a proxy for the level of integration, we propose to monitor the degree to which migrants have interests in local topics, such as festivities, customs or sporting events, compared to interests related to their own cultural heritage. We will use audience estimates from Facebook’s advertising platform which, in a nutshell, give estimates of how many Facebook users match certain criteria.

The project consists of three phases:

(i) Data collection – the first part of the project is to collect data from Facebook’s advertising platform. This is done using a Python script to access an official API.

(ii) Analysis of the data – we will slice the data by gender, age and other dimensions to predict an “integration score” for different segments of migrants. This can be done using either Python or R.

(iii) Documenting the results – to facilitate potential submission to a scientific conference, use of LaTeX for typesetting would be preferred.
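The analysis phase needs a concrete definition of the "integration score", which the description above leaves open. One hypothetical definition is the share of a segment's audience matching local-interest criteria out of the audience matching either local or heritage interests; all audience counts below are invented placeholders, not real Facebook estimates:

```python
# A hypothetical "integration score" per migrant segment:
# local / (local + heritage), computed from (invented) audience estimates.

def integration_score(local_audience, heritage_audience):
    total = local_audience + heritage_audience
    return local_audience / total if total else float("nan")

segments = {                      # segment -> (local, heritage) audiences
    ("women", "18-24"): (12_000, 30_000),
    ("men", "18-24"):   (9_000, 36_000),
    ("women", "25-34"): (25_000, 25_000),
}
scores = {seg: round(integration_score(l, h), 3)
          for seg, (l, h) in segments.items()}
```

Slicing by gender, age, and other targeting dimensions then amounts to computing this score per segment and comparing across segments.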

The project offers the opportunity to learn about interdisciplinary data science research, in particular in the domain of Computational Social Science.

Prerequisite: The project requires strong programming skills, preferably in Python. Experience with data analysis in Python or R would help. Knowledge of LaTeX is a further plus.

Instructor (name and email): Kiran Garimella ([email protected] ), Ingmar Weber ([email protected]), Emilio Zagheni (UW)

Topic presented by: instructor

Topic available: yes

Topic #27: Reading and implementing recent algorithmic papers

Background: This project is for mathematically/algorithmically inclined students to read, reflect on, and write a report in their own words about some of the recent advances in algorithms. An implementation would also be cool, as appropriate.

Project 1: Distance-sensitive hashing

Project 2: Interesting reads from Greg Valiant & co. (arXiv and conference versions):

Project 3: From Petteri’s team


If you are interested, please go through the papers, select one project and contact Petteri about your interest.

Prerequisite: Interest in mathematics and algorithms, and a passion for implementing complex algorithms.

Instructor (name and email): Petteri Kaski ([email protected])

Topic presented by: instructor

Topic available: yes

Topic #28: Translation invariance in deep convolutional neural networks


Modern deep neural networks (DNNs) reach phenomenal, superhuman performance in image classification tasks. However, their inner workings are not understood all that well. Among other things, they easily stumble over image modifications that are trivial to us humans: for instance, merely translating the input image often leads to large changes in predictions. In this project, you will implement a DNN image classifier based on a high-quality downsampling operation (using, e.g., the Lanczos filter) and compare the resulting network’s resilience to translation and other simple image modifications against standard max-pooling.
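A one-dimensional sketch of the underlying aliasing problem: stride-2 max-pooling can map a signal and its one-sample shift to different outputs, which is what a high-quality low-pass downsampling step is meant to mitigate. The toy signal is invented for illustration:

```python
# Stride-2 max-pooling is not shift-equivariant: shifting the input by a
# single sample can change which samples share a pooling window, so the
# pooled output changes in a way a one-step output shift cannot explain.

def maxpool2(x):
    return [max(x[i], x[i + 1]) for i in range(0, len(x) - 1, 2)]

signal  = [0, 1, 0, 0, 0, 1, 0, 0]
shifted = [0] + signal[:-1]          # shift right by one sample

p0 = maxpool2(signal)    # -> [1, 0, 1, 0]
p1 = maxpool2(shifted)   # -> [0, 1, 0, 1]
```

A band-limiting filter (such as Lanczos) applied before subsampling spreads each impulse over neighbouring samples, so small input shifts produce correspondingly small, predictable output changes instead of this flip.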

Prerequisites: Good software engineering skills, basic experience with training neural networks, Python.

Instructor (name and email): Prof. Jaakko Lehtinen,  [email protected]  

Topic presented by: instructor

Topic available: yes


Artificial Intelligence Stack Exchange is a question and answer site for people interested in conceptual questions about life and challenges in a world where "cognitive" functions can be mimicked in purely digital environments.

What are the latest ‘hot’ research topics for deep learning and AI?



I did my Master’s thesis on Deep Generative Models and I’m currently looking for a new subject.

Q: What are the “hottest” research topics that are taking a lot of attention of the deep learning community lately?

A few clarifications:

  • I did look through similar questions and none of them answered my question.
  • I come from a pure mathematical background and only transitioned into deep learning a year ago, and my research on generative models was mostly theoretical: most of my work revolved around structured probabilistic models and approximate inference. That said, I have yet to explore real-world applications of deep learning.
  • I did my homework before posing the question. My goal was to get AI SE’s input on the matter and see what people are working on.
deep-learning ai-field

asked Mar 21 at 14:45 by Achraf Oussidi (edited Mar 23 at 20:26)


  • NASNet is really cool. They used a neural network to optimize the structure of another neural network. (Mar 21 at 18:17)

  • I’m leaving this open since the OP did review similar questions and didn’t find an answer. That said (without having recently reviewed the potential duplicates), it would be good to try to distinguish this question as much as possible from the previous questions. – DukeZhou (Mar 23 at 20:25)

2 Answers





Well, there are certainly a lot of areas where you can contribute to research. Since you say you did a Master’s thesis on deep generative models, I assume you are comfortable with machine and deep learning.

Digital epidemiology is one of the areas where you can certainly apply deep learning. It is still a relatively new field compared to other branches of computational biology. An example would be to study the impact of online digital records on predicting the prevalence and spread of diseases.

Such online records can be obtained from search engines, social media sites, and sometimes government agencies. For example, search-interest data for the term “Skin Cancer” shows how interest in the term varies across the globe, and this data can be used to form new hypotheses. If the data show more interest from a specific region or country, that may indicate the disease is more common in that part of the world. Similar hypotheses can be built and tested, and deep learning can certainly improve the accuracy of the traditional models used to validate such hypotheses.

Another interesting area of research may be the comparison of Long Short-Term Memory (LSTM) networks against traditional time-series models. I don’t believe mature research exists in this area yet. Maybe you can start from this good blog.

Signal processing may be another very interesting, and also very practical, area for building and validating theories on top of deep learning models. However, the mathematics in signal processing can be pretty hard to pick up. All of these options will require you to work in a team with people from the specific domains if you want to produce high-quality research.

Other areas include NLP (especially language translation from Hindi to Urdu or Persian), online digital marketing, behavioral sciences, manufacturing, and investment. Specific research directions may open up further if you know experts in these fields.

answered Mar 22 at 16:26 by Sibghat Ullah


  • Thank you for your answer. Great suggestions! As a matter of fact, I have briefly worked with LSTMs. They can be used to generate images with long-range dependencies, as in PixelRNN. As for signal processing, I do come from a math background, so that’s actually my cup of tea. – Achraf Oussidi (Mar 22 at 17:34)

  • Welcome to AI and thanks for contributing. We’ve had some previous questions about using current AI methods in the medical field. (Too numerous to list here, but if interested, just search for "medical" on this stack.) – DukeZhou (Mar 22 at 19:48)

  • @DukeZhou Thanks for providing insightful knowledge. Always save the human civilisation. Good work, Captain. – quintumnia (Mar 22 at 20:01)

  • The question requested research topics rather than applications of existing topology, even though the latter is certainly important in the overall process of technology advancement, and your answer has some creative suggestions for that separate question. – Douglas Daseeco (Aug 4 at 1:27)


The hottest topics are not necessarily the most promising, just as the hottest stock in the ad feed of your browser is not necessarily the smart bet.

If you are interested in what seems promising, there are hundreds. These are a few of note.

Residual Attention Networks

This improvement over RNNs is interesting because it is thought to be more resource-thrifty, converging in less time and with less parallelism.

  • Residual Attention Network for Image Classification, Fei Wang et al., 2017
  • Look Closer to See Better: Recurrent Attention Convolutional Neural Network for Fine-grained Image Recognition, Jianlong Fu et al., 2017
  • VideoWhisper: Toward Discriminative Unsupervised Video Feature Learning With Attention-Based Recurrent Neural Networks, Na Zhao, 2017
  • Hierarchical Recurrent Attention Network for Response Generation, Chen Xing, Wei Wu et al., 2017

As hot as the attention strategy is becoming in the research field, it is not as promising as other research directions that are less in the spotlight.

Automated Development of Models

Many research efforts are directed at automating model development; for example:

  • Hierarchical Self-organizing Maps System for Action Classification, Z. Gharaee, P. Gärdenfors, M. Johnsson, ICAART, 2017
  • Runtime Modelling for User-Centric Smart Cyber-Physical-Human Applications, Lorena Castañeda Bueno, 2017

Signal Topologies That Support Equilibria

Many ignore the importance of GANs. It lies not in the interesting things they can do with images but in how they deviate from the simple signal-path topology in which convergence on a trained set of parameters is achieved over a one-dimensional array of layers and blocks of layers.

If you look at the discriminative and generative components in GAN design, described in some detail in Understanding GAN Loss function, there is a unique topology that creates balance between two components that their designers call adversarial because they play opposing roles. However, their action in the system is collaborative, creating a balance through which the objective is achieved. To me, this is the most promising direction in AI today.

Recall chemical equilibria. The dissolving and crystallization of a solution above saturation is discrete and random at the molecular level but continuous and predictable at the visible level. This is true of many biological stasis systems, including metabolism and mental state.

Designing signal topologies that support additional forms of collaboration, where each component learns its role so that the aggregate system learns its function, is more likely to synthesize what we recognize as intelligence. Rule-based systems and networks that, although they may be deep, remain one-dimensional in terms of signal flow will not, by themselves, ever approximate the most notable features of the human brain.

Parallel Processing Using GPUs as DSPs

VLSI implementations of spiking networks are important, and there are now implementations that leverage GPU hardware acceleration to investigate them without access to the VLSI chips being developed by large corporations.

Speech Recognition and Synthesis for End-to-End TTS

The recent emergence of excellent synthesis in systems such as Google’s WaveNet has opened the door to more accurate TTS (text-to-speech) applications, such that it is probably a good time to become an expert in recording voice for use in training example sets but a bad time to start a custom speech production house using live speakers.

Automated Vehicles

Automated vehicles of various types need experts in vehicle physics, automotive manufacturing, aeronautics, and consumer products, with strong economic and safety incentives driving semi-automation and full automation for a wide range of vehicle types:

  • Mars landers
  • Consumer drones
  • Industrial drones
  • Military drones
  • Passenger aircraft
  • Passenger automobiles
  • Limos
  • Trains
  • Wheelchairs
  • Delivery vehicles
  • Automated food distribution
  • Nuclear plant repair robots
  • Electrical distribution repair robots
answered Aug 2 at 7:23 by Douglas Daseeco (edited Sep 21 at 0:47)

