Written in November of 2022. Thoughts from someone who only began doing AI research this year.
Research is not published in a vacuum. Each journal/conference is a community that cares about specific problems, which vary with the state of the field each year. Submitting your research to these communities means you need to make sure these people clearly understand:
1. The motivation/importance of the problem you are solving,
2. Your new idea,
3. Empirical and theoretical results + implications of your work.
Where to begin research? You need to identify problems yet to be solved or studied.
Ask professors or other researchers what interesting problems they think are left to be solved.
Ask yourself: What are we missing in our pursuit towards general intelligence?
Examples: Overcoming catastrophic forgetting in networks, developing memory systems in networks, building continual learning frameworks, creating multi-modal networks, multi-task learning, etc.
If your research is more applied you might ask yourself: What tools, datasets, new architecture designs, or applications of AI have yet to be made? Do you have the resources and skills to outperform the current state of the art in some way?
It is often good to find survey papers that compile a list of the important papers on a subproblem in the field. These will give you a broad overview of the problem, its framings, and the approaches that have been tried to tackle it.
Another good place to look for papers to read during your literature review is simply the conferences you hope to publish in: read what has been submitted on OpenReview or accepted at the conference.
There is skimming a paper and then there is engaging with it. To some extent, there is a benefit to skimming a lot of papers. They give you a broad overview of the topology of a specific research area. However, only deeply understanding the motivation, framing, key ideas, experiments, implications and weaknesses of a paper will allow you to start piecing together a picture of what someone else has researched.
Don't be discouraged if you cannot identify weaknesses or instantly think of new ideas after doing a lot of literature review. Sometimes the connections and gaps in the ideas are not immediately clear but have to be digested. Give your brain the time to digest ideas. A professor I conversed with told me certain works he published stemmed from ideas that took over a decade to digest.
Ideas exist in another dimension humans simply tap into. Getting ideas from this other realm is out of your control. It is serendipity when it occurs. Exercise the skill to tap into the realm of ideas in new ways. There is no one way.
There are many ways of thinking about things. Some researchers think of a neural network as performing some form of compression on an input to yield a learned representation. Others think of the evolution of the learned representation in a network through the lens of ODEs. Some researchers think inductive biases or inspiration from classical algorithms can improve networks. Others push for more end-to-end learning paradigms. Who is to say one is more correct than the other? The singularity still awaits us. Be bold in your thoughts and approaches.
When looking at plots, theorems, or results in a paper, don't wholly accept the written interpretations of what is going on. Come to your own conclusions. This was an experiment someone ran. What do you think it tells you? Then ask how the experiment was run and whether that could bias the results or constrain where their (or your) analysis can be applied.
Empirical results (i.e. metrics that you track while training a network) or the performance of the network act as the evidence for the claims you make.
Research is not a straightforward, linear process. Ideas you read of, think of, and implement will be revisited several times over the coming weeks and months.
Come up with narratives you can bring forward to the research community. If it sounds compelling, is well motivated, and could have incredible implications, then that's a strong narrative. This narrative will now act as your hypothesis. Then check through experiments whether it holds any water.
This might sound suspect. How can thinking of "narratives" be an epistemically sound way to do research? Well, you would be surprised how much this actually happens. Professor Hinton's GLOM paper is a good example of this. This paper has no empirical results. No code was written. It's just Hinton's ideas and a narrative. Of course, Hinton is of a stature where he can actually put a manifesto of this form out there. Not everyone can publish a paper like this but we can still learn to try thinking from this perspective.
Have the confidence in your thoughts and skills to follow through in doing the mathematics or writing the code for the experiments you think of. Only by doing so can you see where your ideas lead you to.
Absorb as much mathematics as you can and try to think of how it relates to neural networks and ideas in AI. It isn't just learning the math that is important (of course, understanding things with rigour helps) but it is in the act of trying to interpret it and relate it to AI that the magic happens.
The simplest way to do this is to consider simple network architectures and ask what the math could help you change or improve. Could it improve the optimizer or the gradient descent step? Could it compress, sparsify, or reduce your inputs? Could it act as a metric you track to tell you about some aspect of the network (such as stability, sensitivity to change, or the strength/norms of the weights or features)? Could it help you analyse the learned representations, disentangle them, perturb them, put theoretical bounds on them, and much more?
I cannot stress enough the importance of doing this exercise. You won't be able to formalize your ideas without it, so become a magnet for mathematics.
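To make the "math as a metric" idea concrete, here is a deliberately toy sketch in plain NumPy (all names are illustrative, not from any particular library): a single linear layer fit with gradient descent, where we track the norms of the weights and gradients as simple signals of training progress and stability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn y = X @ w_true with a single linear layer.
X = rng.normal(size=(100, 5))
w_true = rng.normal(size=5)
y = X @ w_true

w = np.zeros(5)          # weights of our "network"
lr = 0.01                # learning rate
weight_norms, grad_norms = [], []

for step in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(X)   # gradient of the MSE loss
    w -= lr * grad
    weight_norms.append(np.linalg.norm(w))  # strength of the weights
    grad_norms.append(np.linalg.norm(grad)) # a proxy for convergence/stability
```

A shrinking gradient norm signals convergence, while the weight norm plateauing near the norm of the true weights signals a good fit. The same habit (pick one quantity the math suggests, log it every step) scales up to real networks.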
The Matrix Cookbook is your best friend when dealing with mathematical derivations. Spend time browsing it, sometimes just for fun, until you know what the book offers. In fact, some researchers claim if you sleep with it under your pillow at night you will absorb some of the insights from the book (and most certainly a kink in your neck as well... I won't comment if this is spoken from experience).
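A habit worth pairing with the Cookbook: numerically sanity-check any identity before building on it. A minimal sketch, using the standard identity d(xᵀAx)/dx = (A + Aᵀ)x and checking it against central finite differences:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))   # a general (non-symmetric) matrix
x = rng.normal(size=4)

f = lambda v: v @ A @ v       # f(x) = x^T A x

analytic = (A + A.T) @ x      # Cookbook identity: d(x^T A x)/dx = (A + A^T) x

# Central finite-difference estimate of the gradient, one basis vector at a time.
eps = 1e-6
numeric = np.array([
    (f(x + eps * e) - f(x - eps * e)) / (2 * eps)
    for e in np.eye(4)
])

assert np.allclose(analytic, numeric, atol=1e-4)
```

A check like this takes a minute to write and catches a surprising number of sign and transpose mistakes in longer derivations.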
Some weeks will be good. Others not so good. Some work sessions with other researchers or your advising professor will feel incredible while others will feel like you are stuck on a roadblock/have limited progress. It's fine. This is a marathon, not a sprint. Trust the process and put in the work.
As you read papers, do experiments, have conversations, etc., don't forget to stay organized. Write your thoughts down. Written thoughts are saved thoughts. Any thought not written down will be gone in the next second and may never come back, ever. Imagine the gravity of this fact. You may have THE next idea, but if you don't put it down in writing it may disappear into the aether. Therefore, write down good explanations of ideas, intuitions or interpretations of phenomena, elegant derivations, and experimental results, and if you have a conversation, try to summarize your insights once it is done.
I have found that when I am trying to understand a problem, develop a solution, or go over an idea it helps to speak to people at three different levels relative to you:
1. A beginner on the topic/subject area.
They will force you to explain everything from the ground up and their questions will test if you understand the foundations.
2. A person at your level on the topic/subject area.
Your conversation with them will help balance your view and offer a diversity of opinions.
3. A person who is much more knowledgeable on the topic/subject area.
By speaking to the two people before, you will be equipped for this conversation and will gain more from it. Furthermore, you will get insight on whether the problem is of value to others in the field along with very different thoughts or ideas about directions to explore.
I will update this post while I am still in my first year of AI research.