Science
The "Extreme Learning Machine"
I'm most definitely snowed in today, and am leafing through some issues of one of my favorite journals, Industrial & Engineering Chemistry Research, and I came across a cool paper about one of my favorite topics, ionic liquids, that discusses the "Extreme Learning Machine."
Ionic liquids are generally salts of cationic and anionic organic molecules that are liquid at or near room temperature. Because they are generally not volatile, they can eliminate some of the problems associated with other process solvents, specifically air pollution. Although the term "green solvent" is probably overused with respect to ionic liquids, their very interesting potential uses have led to a vast explosion of papers about them in the scientific literature. There is, to be sure, an almost infinite number of possible ionic liquids (and related liquids called "deep eutectics").
My own interest in these compounds is connected with my interest in the separation of fission products and actinides in the reprocessing of used nuclear fuels. I am also interested in their potential for the treatment of certain biological products, including lignins, a constituent of biomass quite different from cellulose that represents a sustainable route of access to aromatic molecules, and in their possible use as radiation-resistant (in some cases) high-temperature heat transfer fluids.
Anyway, about the "Extreme Learning Machine": The paper in question, the one I've been reading, was written by scientists at the Beijing Key Laboratory of Ionic Liquids Clean Process, State Key Laboratory of Multiphase Complex Systems, Institute of Process Engineering, Chinese Academy of Sciences, Beijing 100190, China. It is this one: Ind. Eng. Chem. Res., 2015, 54 (51), pp 12987–12992
The σ-profile is a quantum mechanical descriptor of the charge distribution on the surfaces of molecules and organic ions.
Here's the fascinating text:
Reference 24 is: Huang, G.-B.; Zhu, Q.-Y.; Siew, C.-K. Extreme learning machine: Theory and applications. Neurocomputing 2006, 70, 489–501.
Hmm...the program needs to "learn" only a few parameters...
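If I understand reference 24 correctly, the trick is that the hidden-layer weights are never trained at all: they are set once at random, and the only parameters the program "learns" are the output weights, found in a single least-squares solve. Here is a rough NumPy sketch of that idea; the toy data, layer size, and numbers below are entirely made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: learn y = sin(x) from noisy samples.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + 0.05 * rng.standard_normal(200)

# Extreme Learning Machine idea (Huang et al., 2006): the hidden-layer
# weights W and biases b are drawn at random ONCE and never adjusted.
n_hidden = 50
W = rng.standard_normal((1, n_hidden))
b = rng.standard_normal(n_hidden)
H = np.tanh(X @ W + b)          # random hidden-layer features

# The only "learned" parameters are the output weights beta,
# obtained in one least-squares solve -- no gradient descent at all.
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

# Predict at a few new points.
X_new = np.linspace(-3, 3, 5).reshape(-1, 1)
y_pred = np.tanh(X_new @ W + b) @ beta
```

Since the whole "training" step is one linear-algebra solve, it's easy to see why it can be orders of magnitude faster than iteratively tuning every weight.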
I always keep in the back of my mind Penrose's criticism of the concept of "artificial intelligence" (maybe because being a human being, I still want my species to be relevant) but I'm intrigued. Neurocomputing is a journal I've never accessed before, but when I can get out of here after this blizzard, I'm going to take a look at that paper which is apparently available at Princeton University's library.
I guess I'm a dork, but I find it all kind of cool...
phantom power
(25,966 posts)Instead of bumping down the gradient in the more traditional Boltzmann style. That also makes it nice for scale-out parallel computing environments like Hadoop or Spark.
It does require some compromises when training a multi-layer network. You end up optimizing one layer, then another... kind of like tightening bolts on a gasket, you circle around a few times until everybody is tight.
NNadir
(33,368 posts)Actually I'm fairly ignorant of the "nuts and bolts" of computer science, although I did see in Google Scholar that the "Extreme Learning Machine" concept has been around for quite some time.
I have a general feel for computational chemistry theory and the general concept of the Kohn–Sham formulation of "normal" density functional theory, and am trying to wrap my little brain around "orbital-free density functional theory," but I don't know how it all works on a computational level.
From what I gather from poking around the internet while snowed in - and I'm not likely to be able to leave this house without many hours of digging - and borrowing some of your lexicon, the concept of "machine learning" involves using a "learning set," or "training set," computing a "best fit" using (hopefully convergent) calculations, and then weighting the results to approach a better fit.
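In its simplest form, that loop of "compute a fit, then adjust the weights toward a better fit" can be sketched in a few lines. This is just a toy illustration with a made-up training set and an invented step size, fitting a straight line by repeated small corrections:

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny "training set": points drawn from y = 2x + 1 plus noise.
X = rng.uniform(0, 1, 100)
y = 2.0 * X + 1.0 + 0.1 * rng.standard_normal(100)

# Start from a guess and repeatedly nudge the weights in the direction
# that reduces the squared error -- the "hopefully convergent" part.
w, b = 0.0, 0.0
lr = 0.3                        # step size (chosen by trial and error)
for _ in range(1000):
    err = (w * X + b) - y
    # Gradient of the mean squared error with respect to w and b.
    w -= lr * 2 * np.mean(err * X)
    b -= lr * 2 * np.mean(err)
# After enough passes, w and b should be close to the true 2 and 1.
```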
For many chemists - myself certainly included, since I am running out of time on the planet and will never have the time to learn the details, the "nuts and bolts," of computational science - it's all rather like ordering a meal in a fine restaurant. You don't know how the food was prepared, but you enjoy the taste anyway.
My youngest son, still in high school, is considering a career in Materials Science, and I hope he will be inspired to learn more about these important "nuts and bolts" than his old man did.
Thanks for your stimulating comment!
Massacure
(7,497 posts)I graduated with a degree in computer science, and I took an artificial neural networks class that one of the professors only offered one session of every other year. It was basically a graduate level class toned down a bit for upper level undergraduate students. I'll never forget one of the homework assignments we had - given the size of a bunny's pupil, predict its age. The grade we received was dictated how close our predictions were. Our professor had to reassign the homework assignment to us because most of us were so far when we submitted it the first time around. I reminisce about it now, but that was a stressful assignment back when I was working it. Finding algorithms that work for the training involves a lot of trial and error.
Anyway, your comments about using a learning set to train the network and using the results to tune it are spot on. One of the caveats, though, is that you don't want an algorithm that is too convergent, otherwise the network becomes over-trained. This is particularly true when data points gradually shift over time - the network needs to be able to "forget" to some degree what worked in the past. A good example of this might be a network taught to predict the winner of a football game played in the NFL. The NFL has become more offense oriented and offenses more pass oriented over time, so an algorithm trained on games from this season may work really well for games next season but be awful at predicting games ten years from now.
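The "forgetting" point can be shown with a toy sketch (the numbers and the drift here are completely invented): a predictor that weights every past season equally lags further and further behind a slowly rising average, while one that exponentially discounts old seasons keeps tracking it:

```python
import numpy as np

rng = np.random.default_rng(2)
seasons = np.arange(200)
# A made-up statistic (say, average points per game) drifting upward.
scores = 40 + 0.05 * seasons + rng.standard_normal(200)

# Predictor A: plain running mean -- never forgets, so it lags the drift.
mean_all = np.cumsum(scores) / (seasons + 1)

# Predictor B: exponentially weighted mean -- old seasons fade out.
alpha = 0.1
ewm = np.empty_like(scores)
ewm[0] = scores[0]
for t in range(1, len(scores)):
    ewm[t] = (1 - alpha) * ewm[t - 1] + alpha * scores[t]

# On the most recent season, the forgetting predictor sits much closer
# to the current true level than the all-history average does.
true_now = 40 + 0.05 * seasons[-1]
```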
jakeXT
(10,575 posts)For example a spam filter that decides between a spam email and a normal email.
https://en.m.wikipedia.org/wiki/Naive_Bayes_spam_filtering
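A toy version of such a naive Bayes filter fits in a few lines. The tiny training set below is made up purely for illustration; Laplace smoothing keeps words the filter has never seen from zeroing out the probabilities:

```python
import math
from collections import Counter

# Made-up training messages, labeled spam or ham (normal mail).
spam = ["win money now", "free money offer", "win free prize"]
ham = ["meeting at noon", "lunch at noon tomorrow", "project meeting notes"]

def word_counts(msgs):
    c = Counter()
    for m in msgs:
        c.update(m.split())
    return c

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_prob(msg, counts, prior):
    # Sum of log word probabilities, with +1 Laplace smoothing.
    total = sum(counts.values())
    lp = math.log(prior)
    for w in msg.split():
        lp += math.log((counts[w] + 1) / (total + len(vocab)))
    return lp

def classify(msg):
    # Equal priors; pick whichever class makes the message more likely.
    if log_prob(msg, spam_counts, 0.5) > log_prob(msg, ham_counts, 0.5):
        return "spam"
    return "ham"
```

With this training set, a message like "free money" lands on the spam side and "meeting tomorrow" on the ham side, simply because those words occurred more often in the corresponding training messages.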