These Past Weeks in Science & Tech - 001

April 2, 2017 by guidj

I tend to read quite a bit. I read journals, blogs, and sites on technology, AI, and learning (with and without machines). These include renowned ones, like ACM Tech News, MIT News and Technology Review, and aleatory ones that I find on the web through platforms like Medium. As someone who enjoys reading, and writing, I figured I could use summary notes on the things I learn about. And so, I decided to do that on this blog. This is the first edition of what I am labeling as a “summary notes” type of post, which I will be writing every few weeks.

Tentatively, there will be three sections to these posts:

  • Research/R: here, I will refer to news on research that has had an impact on bringing us closer to understanding, or achieving, something in a field.
  • Development/D: here, I will refer to recent developments that are closer to practical solutions that could soon have an impact on people’s lives.
  • Notes/N: in this section, I will scribble my personal thoughts on things I have recently learned or thought about, and/or played with. Expect randomness.

My aim with this is to keep a log of what I have learned in terms of new advances and breakthroughs from the news sources I consume on science and technology, and to encourage you, the reader, to learn, and think about them as well. Hence, I shall provide a reference to the sources, for your delight.

Granted, there is a question of recency of information. The way I see it though, scientific and technological discoveries always have value. And considering that it can take over a year for a research publication to appear from the moment it’s submitted, I believe that posting every few weeks provides a good balance because (a) the information will still be relatively recent, and (b) it gives me time to digest, and condense, different topics, to provide a more balanced summary of the ones I find most interesting. Or as Spock would say…

[Image: Spock, “Fascinating”]

/R

Multi-tasking Neural Networks

Training a neural network is a task of figuring out the weights that will minimize the error of the system when classifying the input instances fed to it. Generally, this means that each trained network is well tuned to perform well on the single task that it was trained for, i.e. if we take a trained network and try to get it to learn weights for a new problem, the learning algorithm will tune it for the new problem at the expense of it no longer being as good at solving the previous task, if useful at all. This is called the forgetfulness problem, better known as catastrophic forgetting.

DeepMind, the company behind DQN, the neural network that managed to get computers to play and win Atari games in 2014, claims to have solved this problem of forgetfulness in neural networks.

Using what I would describe as weighted learning, they devised a new training strategy in which the training algorithm tries to avoid modifying the neurons that are critical for a previously learned task. This way, the neurons less relevant to task A learn to solve task B, leaving the network’s ability to solve task A relatively intact.
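To make that intuition concrete, here is a minimal sketch of what such a penalty could look like, assuming a PyTorch model. This is my reading of the idea, not DeepMind’s implementation; the `fisher` importance scores, the `theta_star` snapshot of the old weights, and the `lam` coefficient are illustrative names I’ve made up for the example.

```python
import torch

def ewc_penalty(model, theta_star, fisher, lam=1000.0):
    """Quadratic penalty that discourages moving the weights that were
    most important (highest importance score) for the previous task."""
    penalty = torch.tensor(0.0)
    for name, param in model.named_parameters():
        # fisher[name]: how important each weight was for task A (assumed given)
        # theta_star[name]: the weight values right after training on task A
        penalty = penalty + (fisher[name] * (param - theta_star[name]) ** 2).sum()
    return 0.5 * lam * penalty

# During training on task B, the total loss would then be something like:
#   loss = task_b_loss + ewc_penalty(model, theta_star, fisher)
#   loss.backward(); optimizer.step()
```

The effect is that weights deemed critical for task A are pulled back towards their old values, while the rest are free to adapt to task B.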

This could prove very useful in bringing us a step closer to general intelligence. In software engineering, we also tend to have single-purpose applications. But we also have general-purpose ones, like databases. The same applies in machine learning. Getting algorithms to do more than one thing can potentially help us understand how our brains work to make us capable of learning a great deal of things, like walking and talking, without compromising things we learned in the past.

Source: Enabling Continual Learning in Neural Networks

Roaring up the wrong tree

Know how in science, everything is up for debate until we prove one thing, given existing evidence and methods of proof, and we hold to that until we’re proven wrong, given new evidence and methods of proof? It’s the fun part of discovery, after all.

Well, it appears that the classification system for dinosaurs is going to have to be revised. Researchers from the University of Cambridge in the UK have published work that suggests that the way dinosaur families have been arranged up until now might need to be corrected.

To keep this brief, there were two kinds of dinosaurs: bird-hipped, and lizard-hipped ones. The classification is done according to their hips. I’ll leave the Latin names to the specialists. According to the work of Dr David Norman and his colleagues, a sub-branch of the lizard-hipped dinosaurs is all too similar to the bird-hipped dinosaurs. In fact, the research suggests that both evolved to be similar, only at different times. Consequently, the team proposes that both groups be combined to form a new group, distinguished from the other lizard-hipped dinosaurs. After all, hips don’t lie.

Go on and add this to the list of things no longer valid since you studied them in high school, next to how many planets we have in our Solar System. Remember to check your facts before you have any kind of talk with children. They’ll probably have more up-to-date insights than you, and I.

Source: New study shakes the roots of the dinosaur family tree

/D

Crowd-sourcing intel in emergencies

Though there are issues surrounding privacy (e.g. location sharing), and the spread of (mis)information within social networks, especially in times of panic, there can also be opportunities to leverage their existence for social causes. A team of researchers from Universidad Carlos III de Madrid (UC3M), which includes Prof. Díaz, has published research on work they’ve done to mine tweets for information that could be helpful for disaster and emergency response teams.

The researchers made use of ontologies, and came up with a method wherein one can define the domain and type of information they’re interested in, e.g. places, and the system will mine social feeds to extract data points it deems relevant for the situation. The idea is to bring what’s out there to the disaster response planners in an actionable format.
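As a toy illustration of the general idea (and only that; this is not UC3M’s system), one could describe the domain as a small set of concepts and terms, and scan a social feed for posts that mention them. The ontology, the tweet format, and the keyword matching below are all simplified assumptions.

```python
from dataclasses import dataclass

# A made-up, minimal "ontology" for a disaster-response scenario.
ONTOLOGY = {
    "place": {"hospital", "shelter", "bridge", "school"},
    "need": {"water", "medicine", "rescue", "evacuation"},
}

@dataclass
class DataPoint:
    tweet_id: str
    concept: str
    term: str
    text: str

def extract_data_points(tweets):
    """Yield (concept, term) hits found in each tweet's text."""
    for tweet in tweets:
        tokens = set(tweet["text"].lower().split())
        for concept, terms in ONTOLOGY.items():
            for term in terms & tokens:
                yield DataPoint(tweet["id"], concept, term, tweet["text"])

sample = [{"id": "1", "text": "families need water and rescue near the old bridge"}]
for dp in extract_data_points(sample):
    print(dp.concept, dp.term)  # need water / need rescue / place bridge
```

A real system would of course need far richer language understanding than splitting on whitespace, but the shape of the output, structured data points a response planner can act on, is the point.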

Perhaps this could work in conjunction with other emergency services, such as International SOS, or Facebook’s emergency check-in, to let authorities know what’s happening on the ground, to better prepare and coordinate responses. One can easily see drones being thrown into this mix as well. While these initiatives have tangible value, they also highlight the need to ensure adequate privacy and security of systems for such scenarios.

Source: New multi-device system for handling emergencies with information from social networks

Flowtune: It’s a packet’s market

Scheduling of tasks, and resources, is a hard problem to solve. Solutions vary, with trade-offs between control, allocation efficiency, and usage efficiency. Micro-services, whether for live, real-time systems, or batch tasks, often compete in a data center for resources. It’s the responsibility of a scheduler to assign processor, memory, and network share to each system. This is where we need optimizations, because a bad solution means wasted resources, which you will probably still have to pay for. MIT graduate student Jonathan Perry, along with his advisors, and partners, presented Flowtune, a network bandwidth allocation system for data centers.

Unlike the traditionally used Transmission Control Protocol (TCP), Flowtune adjusts the bandwidth assigned to services according to their expected return. Think of each service as an investment, and the bandwidth as the investment value we will put down. So, a service with higher priority, like search, would have a higher expected return, and would thus be allocated more bandwidth compared to a service with a lower expected return, like a batch job to update daily aggregate metrics. More importantly, the system is flexible enough to adapt over time to different market values of “assets”. I see a resemblance here to a reinforcement learning strategy, where rewards are known, or user defined.
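Here is a back-of-the-envelope sketch of that market intuition, not Flowtune’s actual algorithm: each flow gets a share of a link proportional to the expected-return weight we assign it. The flow names and weights are made up for illustration.

```python
def allocate_bandwidth(link_capacity_gbps, flows):
    """flows: dict of flow name -> expected-return weight.
    Returns each flow's share of the link, proportional to its weight."""
    total_weight = sum(flows.values())
    return {
        name: link_capacity_gbps * weight / total_weight
        for name, weight in flows.items()
    }

# Search traffic is valued higher than a daily batch job, so it gets more bandwidth.
print(allocate_bandwidth(10, {"search": 8.0, "daily-aggregates": 2.0}))
# {'search': 8.0, 'daily-aggregates': 2.0}
```

The real system solves this kind of allocation centrally and re-solves it as flows arrive and finish, which is what lets it track the changing “market value” of each service over time.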

According to published benchmark results, it can be nine to 11 times faster than TCP at completing requests in the worst cases. Should this be adopted by data centers, I think it’s safe to say our browsing experience is about to get a whole lot smoother.

Source: Faster page loads

/N

Singular(ity)

In this edition of “These Past Weeks in Science and Tech”, I shall touch upon the topic of singularity.

As many know, current advances in machine learning and artificial intelligence have sparked debates and discussions about hyper-intelligent machines, capable of either overthrowing humanity, or “enhancing” it. Or so the opinions go. Prominent figures, such as Elon Musk, and Stephen Hawking, have voiced their concerns over this, in their AI Open Letter. The arguments laid out in the letter seem balanced. We should expand the use of AI to the benefit of society, while researching potential pitfalls to prevent adverse consequences.

On the other hand, some figures, like Google’s Ray Kurzweil, have now begun to predict a point in time, within the next 50 years, when humans will blend with machines to become… “better”. I agree with Kurzweil when he says that our lives are already enhanced with our phones, search engines, and voice assistants. In fact, our lives were enhanced when we found out about fire, and electricity. I have no doubt about the benefits of human enhancement. We have been doing so for people deemed to be at a disadvantage for a very long time. From eye glasses, to wheelchairs, we humans have a consistent track record of coming up with tools to aid those that need assistance, in some way. However, what’s being discussed now, when we talk about melding humans with machines, is not the same.

From the social sciences, we know the makings of a person are based on their experiences. The way in which we have dealt with our flaws, limitations, and problems, both internal and external, makes up our character’s DNA. Should we have infinite memory, infinite thinking capability, infinite knowledge, I wonder, what would we be then? And also, what for? Fighting to be a better species is without a doubt one of the most important causes we can have as individuals and groups, during our brief lives on this planet. What I question though is the line between enhancing and expanding. If we look at sci-fi, oftentimes you’ll see a depiction of an advanced alien race that has a hive mind. Every creature is aware of the thoughts and experiences of every other creature. They all connect with, and understand each other (and yet for some reason, their main purpose is to take over other planets because… [insert reason for invasion from favorite alien invasion film here]).

Perhaps if we did away with the clutter of having to translate our thoughts into words, and expressing them in a manner that may or may not get the message delivered across the way we intended, we might have a better peace/conflict ratio, due to better understanding among us. Yet, when I think about it, I also ask myself, what then? What would we do, if from the day we were born we knew and understood everything? If there are no struggles to overcome; even small ones, like mastering a game, or learning to play an instrument by putting time and dedication into it. Because, let’s face it, no one needs the struggle of war, and disease.

My dreams for artificial intelligence, which sprung from many hours of science fiction in television and writing, have always been more humble. To me, the perfect AI would be one that would work like an oracle. We’d go up to it and ask, “What would be the best way to travel through space?”, and it would proceed to list the technologies we should pursue that would give us higher chances of success in achieving planetary travel in very short time spans. Or we would ask instead, “How do we get rid of disease X?”, and it would proceed to recommend chemical and biological agents with a greater capacity for annihilating a disease, according to genetic code analyses it performed. Essentially, it would be a living library, one capable of connecting data points to derive credible hypotheses to lead research to the next frontier. But it wouldn’t just be capable of crunching over existing data points. It would also write, and carry out, experiments on its own to learn. Because to me, one of our biggest limitations is our inability to learn from the past. To take what has been done, and understood, and build on top of it. This happens in science, and technology. But in small steps. And yet, somehow, I can’t help but think that despite being small, we have moved forward. We’re moving forward right now, with new advances and breakthroughs happening every here and there, now and then. Still, it could be better.

To be clear, I support a state of being in which one does not need to work in order to eat. And I believe technology can get us there. People could use their time to pursue things while being driven by passion, rather than necessity. We have dozens of big problems to fix around the world. Hunger, safety, justice for gender and racial imbalances, etc. Social development is just as necessary as technological development to address them. Ultimately, I guess the true question being asked here is not whether by melding with technology we’ll be enhancing ourselves, but rather, whether we’d be replacing ourselves with something more efficient. Whatever that would be. As a technologist, I welcome advances, and understanding. As a human though, I think we have to ask ourselves what the true worth is of doing whatever it is our discoveries make possible. Because it could mean the shadowing of our being, for another, which may or may not be “better”. For we do not simply thrive as singular individuals, but as a group. In fact, as different dimensions of groups. And to me, this is something to think about when we talk about singularity.