<h1 id="using-microsoft-python-type-stubs-with-pyright">Using Microsoft python-type-stubs with Pyright (2023-09-17)</h1>

<p>Python type annotations allow static type checking, so that you can catch obvious <code class="language-plaintext highlighter-rouge">AttributeError: NoneType object has no attribute ...</code> in your editor.
They also allow better code completion, because in many cases, type checking tools can infer the type of the object based on return type annotations of functions or methods.
However, not all libraries (especially the ones that were created before Python type annotation got established) have type annotations.</p>
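<p>For instance (a tiny illustration using the standard library, whose annotations live in the typeshed stubs bundled with type checkers):</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import datetime

now = datetime.datetime.now()   # the stub annotates the return type as `datetime`,
weekday = now.strftime("%A")    # so `datetime` methods like `strftime` get completed,
shout = weekday.upper()         # and since `strftime` returns `str`, `str` methods follow
</code></pre></div></div>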
<p>That’s why <em>type stubs</em> exist.
They have <code class="language-plaintext highlighter-rouge">.pyi</code> extensions and are like C headers, which only declare class, function, or method names and their parameter types without implementation.
These type stubs do not have to be coupled with the actual library.
So virtually anyone can create type stubs for an existing library and ship them separately.</p>
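<p>As a rough illustration, a stub for an imaginary <code class="language-plaintext highlighter-rouge">greeter</code> module might look like this (all names here are made up):</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code># greeter.pyi -- a hypothetical stub file. Only names and signatures are
# declared; every body is replaced with `...`.

class Greeter:
    name: str
    def __init__(self, name: str) -> None: ...
    def greet(self, excited: bool = ...) -> str: ...

def make_greeter(name: str) -> Greeter: ...
</code></pre></div></div>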
<p>Microsoft’s <code class="language-plaintext highlighter-rouge">pylance</code> ships with type stubs bundled for popular libraries without native type annotations.
But especially for large libraries, type stubs cannot be perfect, so Microsoft maintains a repository called <a href="https://github.com/microsoft/python-type-stubs"><code class="language-plaintext highlighter-rouge">python-type-stubs</code></a> to create type stubs together with the community, and these stubs are bundled with <code class="language-plaintext highlighter-rouge">pylance</code>.</p>
<p>However, <code class="language-plaintext highlighter-rouge">pylance</code> is closed source, and is only available inside VS Code.
As a Neovim person, I instead have to use the open-source version of <code class="language-plaintext highlighter-rouge">pylance</code>, which is <a href="https://github.com/microsoft/pyright"><code class="language-plaintext highlighter-rouge">pyright</code></a>.
And by default, <code class="language-plaintext highlighter-rouge">pyright</code> doesn’t ship with <code class="language-plaintext highlighter-rouge">pylance</code>’s type stubs.</p>
<p>So the question is, <strong>how do I use <code class="language-plaintext highlighter-rouge">python-type-stubs</code> with <code class="language-plaintext highlighter-rouge">pyright</code></strong>?
It’s actually simple enough, but at the time of writing, there doesn’t seem to be a straightforward guide on this anywhere on the Internet.</p>
<p>Say you have a Python project <code class="language-plaintext highlighter-rouge">proj</code> managed with <code class="language-plaintext highlighter-rouge">git</code>.</p>
<p>Add <code class="language-plaintext highlighter-rouge">python-type-stubs</code> as a git submodule under the directory <code class="language-plaintext highlighter-rouge">stubs</code>:</p>
<div class="language-console highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="gp">$</span><span class="w"> </span><span class="nb">cd </span>proj
<span class="gp">#</span><span class="w"> </span>Assuming you have GitHub SSH authentication <span class="nb">set </span>up.
<span class="gp">$</span><span class="w"> </span>git submodule add git@github.com:microsoft/python-type-stubs stubs
</code></pre></div></div>
<p>Then, point <code class="language-plaintext highlighter-rouge">pyright</code> to the stubs inside the submodule.</p>
<p>If you’re using <code class="language-plaintext highlighter-rouge">pyproject.toml</code>:</p>
<div class="language-toml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nn">[tool.pyright]</span>
<span class="py">stubPath</span> <span class="p">=</span> <span class="s">"./stubs/stubs"</span>
</code></pre></div></div>
<p>If you’re <em>not</em> using <code class="language-plaintext highlighter-rouge">pyproject.toml</code>, you need to have <code class="language-plaintext highlighter-rouge">pyrightconfig.json</code> in the root of your workspace:</p>
<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w">
</span><span class="nl">"stubPath"</span><span class="p">:</span><span class="w"> </span><span class="s2">"./stubs/stubs"</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre></div></div>
<p>When you see glitches in the type stubs provided by <code class="language-plaintext highlighter-rouge">python-type-stubs</code>, just post a PR fixing the issue.
When the PR gets merged, update the submodule (e.g., <code class="language-plaintext highlighter-rouge">git submodule update --remote</code>).</p>

<h1 id="advisors-are-like-gpt">Advisors are like GPT (2023-05-11)</h1>

<p>I mean, my advisor is not a GPT model, of course.
However, talking with my advisor is not like just talking to my friend or colleague.
Efficiently getting advice from him requires a certain mental model of how he thinks and acts, and I realized that it’s sort of similar to prompting GPT models.</p>
<h2 id="gpt-is-stateless">GPT is Stateless</h2>
<p>ChatGPT will remember the details of the conversation in the same thread, but that’s only because they cram in the entire conversation in their context window.
Outside that window, they are basically stateless; when you start a new conversation with ChatGPT, it’ll have an empty context and won’t remember anything about your previous conversations.</p>
<p>Advisors are often also stateless.
Not that they’re stateless intentionally or by design, but due to the sheer amount of things going on around them, it’s more convenient for students to assume that they are stateless and have forgotten everything.
It’s just like how you don’t ask ChatGPT, “Hey I think I asked you something about NP-Hardness in another thread last month, do you remember that?”</p>
<h2 id="context-is-important">Context is Important</h2>
<p>That’s why initializing GPT’s context right is important.
You have probably experienced conversations with ChatGPT where you screwed up the initial description of your problem, and correcting ChatGPT’s understanding took more words than it would have taken to describe the problem right in the first place.
In such cases you can just mutter “Crap.” and click ‘New Chat’ (because ChatGPT is stateless).
However, unfortunately, that’s not so easy if you were talking to your advisor.
Therefore, I try to make sure my advisor’s context is initialized with a concise and accurate picture of where my research is.</p>
<p>I think this is especially observable when I sometimes hear contradicting advice from my advisor.
Not contradicting with my opinion, but with his own advice in the past.
That’s probably because the contexts I gave my advisor that led to those contradictory pieces of advice were inconsistent in some way.
Therefore, usually my next action is to prompt my advisor further to find out if there are any misunderstandings, either in the previous meeting or this one.</p>
<h2 id="fine-tuning-gpt">Fine-Tuning GPT</h2>
<p>I’ve been saying all along that advisors are stateless, but we all know that they’re not completely stateless all the time.
So, I like to think that they take one fine-tuning step at the end of every meeting, and their learning rate depends on how interesting the meeting was (and also on other things that I can’t control).
If I excite my advisor with some interesting observation or good result, they’re more likely to remember.
Otherwise, they probably won’t remember what happened during the last meeting.</p>
<p>In that sense, I think it’s an effective strategy to present my advisor with a concise summary of the end of the meeting.
That way they don’t have to summarize the entire meeting on their own for fine-tuning, but rather just directly use the takeaway messages I present.
For that, I set forth a couple of TODO bullets that are rooted in the core takeaways of this meeting and roughly represent what my advisor can expect for the next meeting.</p>

<h1 id="the-importance-of-mentoring-as-a-phd-student">The Importance of Mentoring as a PhD Student (2022-10-11)</h1>

<p>Two months ago, I began mentoring a Master’s student who reached out to my advisor asking for collaboration opportunities.
I suggested an implementation project that aims to slightly expand the scope of my previous research work <a href="https://ml.energy/zeus">Zeus</a>, and the student decided to go for it.</p>
<p>As a mentor, my task was to collaboratively design solutions with my mentee, answer technical questions, provide feedback on design decisions and code, and figure out what pieces of knowledge the student was missing and either provide study material or come up with Google search keywords.
While I wasn’t doing any <em>actual</em> work, in that I wasn’t studying relevant technology or writing and testing code myself, the sequence of tasks was very difficult to perform satisfactorily because of the very fact that I wasn’t doing any actual work.
That is, without a complete picture of what’s going on, I was supposed to have better foresight than my mentee about what would happen if we were to proceed in a certain direction, or my mentee risked hitting a hard wall.
Eventually, when we officially merged the feature into Zeus, I felt very happy and proud that our collaboration worked out.</p>
<p>While the mentorship itself did not specifically push my ongoing research project forward, it indirectly helped with doing research, because I learned two important things from this experience.</p>
<p>First, I came to respect the weight of the advice my advisor gives me and understand the mental pressure of providing advice.
My mentee and I were simply developing a moderately-sized software feature, and I could reasonably predict what would happen along the way and what things would look like in the end.
Still, sometimes I wasn’t entirely sure about the advice I was giving to my mentee, but I needed to at least <em>look</em> confident anyway in order to give my mentee faith and motivation.
Moreover, it was obvious that if I ended up pointing my mentee to a wrong path, my mentee would face frustration.</p>
<p>What makes it more difficult for my advisor is that we are doing research, which is inherently uncertain; you never know if the hole you’re digging is your grave.
Yet, PhD students expect their advisors to still provide advice that is roughly in the right direction.
I have come to understand that not even the best professors can know with confidence whether the direction a student is headed in is a good one, and to appreciate how incredibly good my advisor is in that he usually provides advice that ends up being correct.</p>
<p>Second, mentoring helped me ask better questions to my advisor.
Recalling my mentoring experience, when my mentee asked for clarification about my suggestion, I sometimes failed to give good answers because I myself was operating based on logical guesses and inexplicable instincts.
I figured this would be the case for my advisor, too.
Thus, instead of interrogating my advisor with a bunch of why-questions, I started to instead ask questions such as “Is it because of A and B that you asked me to do X?”, essentially putting forth my guess of the reason behind his advice.
Then, my advisor would answer whether or not he agrees with what I said, or sometimes even make better suggestions when the guess I provided sparked something, and both cases led to highly productive conversations.</p>
<p>All in all, before this mentorship experience, my perception of mentoring was that it’s only good for me if it ends with some nice tangible outcome in a reasonable amount of time.
However, now I feel like the process of mentorship taught me a lot about the relationship between me and my advisor, and allowed me to improve the productivity of the meetings with my advisor.
Moreover, mentoring itself was an extremely rewarding process, where I could interact with enthusiastic junior researchers who are interested in my research.</p>

<h1 id="on-staying-confident">On Staying Confident (2022-09-14)</h1>

<p>Last April, I submitted the manuscript of <a href="https://ml.energy/zeus">Zeus</a> to the Spring deadline of <a href="https://www.usenix.org/conference/nsdi23">NSDI</a>.
I was fairly confident about my research capability, since I managed to submit something that looks like a good paper by a very stringent deadline.</p>
<p>After a good three-week break, I came back to research in late May, exploring potential future directions.
I suggested some ideas and directions during meetings with my advisor, and basically everything was killed.
They were not just killed, but killed with very good reasons that seemed obvious in retrospect.
Within several weeks, I started to feel so unconfident that I was afraid to talk with my advisor, although I knew logically that killing bad ideas at an early stage only saves me time.</p>
<p>Then suddenly in mid-July, we were notified that Zeus was accepted to appear at NSDI.
My mood swung to the other end of confidence, especially because this was our first submission of the paper to any conference.
I began to prepare the camera-ready version, together with Zeus’s open source repository and homepage.
Finding new research directions was paused for a week or two, since we wanted Zeus to be posted on arXiv as soon as possible and have it collect citations from early on.</p>
<p>While polishing Zeus, I was thinking about what could be a follow-up work of it.
Then I came up with another idea, an idea I liked no less than all the dead ideas in June, and presented that to my advisor.
And he said it was very good.
Now I’m working on that idea, and just found last week that my hypothesis was true at least for a limited set of workloads, and the potential gains can be quite large.</p>
<p>Looking back starting from April, when I submitted Zeus to NSDI, me as a researcher did not change that much.
Most of the time I was on vacation, or was pouring time into polishing Zeus.
However, my level of confidence in terms of research capability fluctuated greatly, which didn’t make any sense.
If the me in April and the me of today are similar researchers, there is no reason to be sometimes confident and sometimes not.
Rather, my confidence level should be determined by my best times, which is currently mid-July, when Zeus was accepted.
External factors, for example whether my ideas are well accepted by people, do not change my potential to do great work.</p>

<h1 id="appending-the-phd-mindset">Appending the PhD Mindset (2022-05-12)</h1>

<p>Say that your coworker makes this statement:</p>
<blockquote>
<p>The pizza served at Mani’s is literally the best in the world. They’re so good.</p>
</blockquote>
<p>What is the most appropriate answer?</p>
<ol>
<li>Hmm, how do you quantify the goodness of a restaurant? Is Google Maps stars a sufficient metric?</li>
<li>No, “good” does not mean “the best”. You should always think about the exact meaning of words when you speak.</li>
<li>You can’t just make such a statement. Do you have an argument to back that?</li>
<li>Lol yeah.</li>
</ol>
<p>After spending half a year as a PhD student, I start to understand when people say:</p>
<blockquote>
<p>Getting a PhD is not about acquiring a set of technical skills, but rather a specific mindset.</p>
</blockquote>
<p>I suppose there are many elements that consist such a mindset, including but not limited to:</p>
<ul>
<li>Excavating meaningful problems to solve and navigating the uncertain process of solving it (#1 above).</li>
<li>Communicating facts and arguments in precise language (#2 above).</li>
<li>Maintaining a critical view of relevant matter and accepting arguments after rigorous reasoning and observation (#3 above).</li>
</ul>
<p>I believe many will agree that none of these are neither easy nor quick to acquire.
One must imbue oneself with principles and constantly self-reflect and self-correct.
However, I think one should go one step further.
One should make a conscious effort so that no existing mindset is completely replaced by the PhD mindset; the PhD mindset must only <em>append</em> to the list of existing mindsets.
Then, one must distinguish situations when it is more appropriate to apply the PhD mindset and when it is not in a fine-grained manner.</p>
<p>Life, at least partly, can be viewed as a multi-task learning problem.
While acquiring new capabilities is important, one must make sure not to catastrophically forget other important things in the process, which may not always be easy especially when the new capability requires an immense amount of concentrated effort to learn.
However, I believe that such an effort is meaningful in advancing one’s maturity as a person.</p>

<h1 id="halide">Halide: a language and compiler for image processing and deep learning (2020-04-15)</h1>
<h2 id="resources">Resources</h2>
<ul>
<li><a href="https://halide-lang.org/">https://halide-lang.org</a></li>
<li><a href="https://github.com/halide/Halide">https://github.com/halide/Halide</a></li>
<li>Halide: A Language and Compiler for Optimizing Parallelism, Locality, and Recomputation in Image Processing Pipelines (PLDI '13)</li>
<li>Automatically Scheduling Halide Image Processing Pipelines (SIGGRAPH '16)</li>
<li>Loop Transformations Leveraging Hardware Prefetching (CGO '18)</li>
<li>Differentiable Programming for Image Processing and Deep Learning in Halide (SIGGRAPH '18)</li>
<li>Schedule Synthesis for Halide Pipelines through Reuse Analysis (TACO '19)</li>
<li>Learning to Optimize Halide with Tree Search and Random Programs (SIGGRAPH '19)</li>
</ul>
<h2 id="paper-summary">Paper Summary</h2>
<h3 id="halide-a-language-and-compiler-for-optimizing-parallelism-locality-and-recomputation-in-image-processing-pipelines"><strong>Halide: A Language and Compiler for Optimizing Parallelism, Locality, and Recomputation in Image Processing Pipelines</strong></h3>
<ul>
<li><strong>Motivation.</strong> Image processing pipelines are often graphs of different stencil computations with low arithmetic intensity and inherent data parallelism. This introduces complex tradeoffs involving locality, parallelism, and recomputation. Thus, hand-crafted code produced with tedious effort is often neither portable nor optimal.</li>
<li><strong>Solution.</strong> Halide decouples the <em>algorithm</em> (what is computed?) and the <em>schedule</em> (when and where?). From each schedule, the compiler produces parallel vector code and measures its runtime. It then searches for the best schedule in the tradeoff space using stochastic search based on a genetic algorithm.</li>
<li><strong>Results.</strong> Generated code is an order of magnitude faster than its hand-crafted counterparts. Automatic scheduling is quite slow and lacks robustness.</li>
<li>
<p><strong>Detail.</strong> Two-stage decision for <em>determining the schedule</em> of <em>each function</em>:</p>
<ul>
<li>Domain Order: the order in which the required region is traversed
<ul>
<li>sequential/parallel, unrolled/vectorized, dimension reorder, dimension split</li>
</ul>
</li>
<li>Call Schedule: when to compute its inputs; the granularity of store and computation
<ul>
<li>breadth-first/total fusion/sliding window</li>
</ul>
</li>
</ul>
</li>
<li><strong>Detail.</strong> <em>Compile steps</em> (all decisions directed by the schedule):
<ul>
<li>Lowering and Loop Synthesis: create nested loops of the entire process, insert allocations and callee computations at specified locations in the loop</li>
<li>Bounds Inference: from the output size, the bounds of each dimension is determined</li>
<li>Sliding Window Optimization and Storage Folding: look for specific conditions and apply</li>
<li>Flattening: flatten multi-dimensional addressing and allocation</li>
<li>Vectorization and Unrolling</li>
<li>Back-end Code Generation - only note GPU:
<ul>
<li>outer loop → inner loops divided into GPU kernel launches</li>
<li>inner loops are annotated in the schedule with block and thread dimensions</li>
</ul>
</li>
</ul>
</li>
<li><strong>Detail.</strong> Stochastic <em>search</em> based on a genetic algorithm
<ul>
<li>Hint hand-crafted optimization styles through mutation rules. These include mutating one or more function schedules to a well-known template.</li>
</ul>
</li>
<li><strong>Thoughts.</strong>
<ul>
<li>The increase in performance is natural, since Halide invests a lot of time in optimization. The real contribution seems to be that Halide formulated the axes of optimization and exposed an easy handle that helps users search the space.</li>
<li>Generated CUDA kernels don’t seem to use CUDA streams or asynchronous copies.</li>
<li>Requires block and thread annotations provided by the programmer.</li>
<li>Without the hand-crafted mutation, I suspect that performance will greatly suffer.</li>
<li>Schedule search could be learned. Monte Carlo tree search maybe? RL will work too, as in NAS.</li>
</ul>
</li>
</ul>
<h3 id="differentiable-programming-for-image-processing-and-deep-learning-in-halide">Differentiable Programming for Image Processing and Deep Learning in Halide</h3>
<ul>
<li><strong>Motivation.</strong> Existing deep learning libraries are inefficient in terms of computation and memory. Also, in order to implement custom operations, the user must manually provide both the forward and backward CUDA kernels.</li>
<li><strong>Solution.</strong> Extend Halide with automatic differentiation (<code class="language-plaintext highlighter-rouge">propagate_adjoints</code>).</li>
<li><strong>Results.</strong> GPU tensor operations 0.8x, 2.7x, and 20x faster than PyTorch, measured with batch size 4.</li>
<li><strong>Detail.</strong> Two special cases of note when <em>creating backward operations</em>:
<ul>
<li>Scatter-gather Conversion: When the forward of a function is a <em>gather</em> operation, its backward is a <em>scatter</em>, e.g. convolutions. This leads to race conditions when parallelized. Thus, the scatter operation is converted to a gather operation.</li>
<li>Handling Partial Updates: When a function is partially updated, dependency is removed for some indices. If two consecutive function updates have different update arguments, the former’s gradient is masked to zero using the update argument of the latter.</li>
</ul>
</li>
<li><strong>Detail.</strong> <em>Checkpointing</em> is already supported but in a more fine-grained manner through schedules: <code class="language-plaintext highlighter-rouge">compute_root</code> for checkpointing, <code class="language-plaintext highlighter-rouge">compute_inline</code> for recomputation, and <code class="language-plaintext highlighter-rouge">compute_at</code> is something in between, e.g. tiling.</li>
<li>
<p><strong>Detail.</strong> <em>Automatic scheduling</em> (only note GPU, ordered by high priority)</p>
<ol>
<li>For all scatter/reduce operations, always checkpoint them and tile the first two dimensions and parallelize computation over tiles. Other types of operations are not checkpointed at all.</li>
<li>Apply <code class="language-plaintext highlighter-rouge">rfactor</code> for large associative reductions with domains too small to tile.</li>
<li>If parallelizing cannot but lead to race conditions, use atomic operations and parallelize.</li>
</ol>
</li>
<li><strong>Thoughts.</strong>
<ul>
<li>Again, automatic scheduling could be better. The scheduler in this work is filled with hand-crafted heuristics.</li>
<li>The paper doesn’t talk about the time needed for automatic scheduling. Probably it took pretty long. Then we can’t use this for deep learning research; training just a single hyperparameter configuration is already burdensome. Deployment has some hope though.</li>
<li>The ‘deep learning operations’ this paper conducted experiments on (grid_sample, affine_grid, optical flow warp, and bilateral slicing) are relatively uncommon compared with matrix multiplication or convolution. This aligns with their claim that Halide is advantageous when you have to <em>implement custom operations</em>.</li>
</ul>
</li>
</ul>
<h3 id="learning-to-optimize-halide-with-tree-search-and-random-programs">Learning to Optimize Halide with Tree Search and Random Programs</h3>
<ul>
<li><strong>Motivation.</strong> Existing autoschedulers are limited because 1) their search space is small, 2) their search procedures are coupled with the schedule type, and 3) their cost models are inaccurate and hand-crafted.</li>
<li><strong>Solution.</strong> Use 1) a new parametrization of the schedule space, 2) beam search, and 3) additionally employ a learned cost model trained on randomly generated programs.</li>
<li><strong>Results.</strong> Deep learning benchmarks on GPU were not reported at all! Those on CPU with image size 1 x 3 x 2560 x 1920 are claimed to outperform TF and PT and be competitive with MXNet + MKL, but the paper mentions no concrete numbers.</li>
<li>
<p><strong>Detail.</strong> <em>Parameters</em> of the schedule (underlined). Beginning from the <em>final</em> stage, make two decisions per stage to build a complete schedule:</p>
<ol>
<li><em>Compute and storage granularity</em> of new stage. An existing stage can be split, creating an extra level of tiling. <em>Tile sizes</em> are also parameters that should be determined.</li>
<li>For the newly added stage, we may parallelize outer tilings and/or vectorize inner tilings and <em>annotate</em>.</li>
</ol>
</li>
<li><strong>Detail.</strong> <em>Beam search</em> with pruning (just kill schedules that fail hand-crafted asserts). Run multiple passes that gradually select good schedules from coarse to fine.</li>
<li>
<p><strong>Detail.</strong> <em>Predicting runtime</em>, which beam search minimizes, with a neural network (see the sketch after this list).</p>
<ol>
<li>Schedule to feature: algorithm-specific + schedule-specific</li>
<li>Runtime prediction: design 27 runtime-related terms and have a small model predict the coefficient of each term; use an L2 loss between the predicted and target <em>throughput</em></li>
<li>Training data generation: use the system itself, iterating between training the model and generating data with the system</li>
</ol>
</li>
<li>
<p><strong>Detail.</strong> Given more time, <em>benchmark</em> several candidates (instead of predicting runtime) and select best. Given even more time, fine-tune the neural network on the benchmark results and repeat beam search (<em>autotuning</em>).</p>
</li>
<li><strong>Thoughts.</strong>
<ul>
<li>A loop nest is a graph. Can we use graph embedding & pooling on schedules to predict runtime?</li>
<li>No comparisons with deep learning frameworks on GPUs. Maybe I have to check this myself.</li>
<li>This paper seems just to incorporate tremendous amounts of manual hand-crafted optimizations and tedious engineering. I cannot find any core novel ideas in this paper; I don’t think there’s anything new.</li>
</ul>
</li>
</ul>
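<p>To make the cost-model idea concrete, here is my own rough PyTorch sketch (not the paper’s code; the feature and term definitions are placeholders):</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code># A small network maps schedule features to coefficients of hand-designed
# runtime-related terms; the predicted runtime is their weighted sum.
import torch
import torch.nn as nn

class RuntimeCostModel(nn.Module):
    def __init__(self, num_features: int, num_terms: int = 27):
        super().__init__()
        self.coeff_net = nn.Sequential(
            nn.Linear(num_features, 64),
            nn.ReLU(),
            nn.Linear(64, num_terms),
        )

    def forward(self, features: torch.Tensor, terms: torch.Tensor) -> torch.Tensor:
        # features: (batch, num_features), terms: (batch, num_terms)
        coeffs = torch.relu(self.coeff_net(features))  # non-negative coefficients
        return (coeffs * terms).sum(dim=-1)            # predicted runtime
</code></pre></div></div>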
<h2 id="code-peek">Code Peek</h2>
<div class="language-c++ highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="cp">#include "Halide.h" // all of Halide
</span>
<span class="kt">int</span> <span class="nf">main</span><span class="p">()</span> <span class="p">{</span>
<span class="c1">// Symbolic definition of the algorithm 'index_sum'.</span>
<span class="n">Halide</span><span class="o">::</span><span class="n">Var</span> <span class="n">x</span><span class="p">,</span> <span class="n">y</span><span class="p">;</span> <span class="c1">// think of these as for loop iterators</span>
<span class="n">Halide</span><span class="o">::</span><span class="n">Func</span> <span class="n">index_sum</span><span class="p">;</span> <span class="c1">// each Func represents one pipeline stage</span>
<span class="n">index_sum</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">y</span><span class="p">)</span> <span class="o">=</span> <span class="n">x</span> <span class="o">+</span> <span class="n">y</span><span class="p">;</span> <span class="c1">// operation defined in an arbitrary point</span>
<span class="c1">// Manually schedule our algorithm.</span>
<span class="n">Halide</span><span class="o">::</span><span class="n">Var</span> <span class="n">x_outer</span><span class="p">,</span> <span class="n">x_inner</span><span class="p">,</span> <span class="n">y_outer</span><span class="p">,</span> <span class="n">y_inner</span><span class="p">,</span> <span class="c1">// divide loop into tiles</span>
<span class="n">tile_index</span><span class="p">,</span> <span class="c1">// fuse and parallelize</span>
<span class="n">x_inner_outer</span><span class="p">,</span> <span class="n">y_inner_outer</span><span class="p">,</span> <span class="c1">// tile each tile again</span>
<span class="n">x_vectors</span><span class="p">,</span> <span class="n">y_pairs</span><span class="p">;</span> <span class="c1">// vectorize and unroll</span>
<span class="n">index_sum</span>
<span class="c1">// tile with size (64, 64)</span>
<span class="p">.</span><span class="n">split</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">x_outer</span><span class="p">,</span> <span class="n">x_inner</span><span class="p">,</span> <span class="mi">64</span><span class="p">)</span>
<span class="p">.</span><span class="n">split</span><span class="p">(</span><span class="n">y</span><span class="p">,</span> <span class="n">y_outer</span><span class="p">,</span> <span class="n">y_inner</span><span class="p">,</span> <span class="mi">64</span><span class="p">)</span>
<span class="p">.</span><span class="n">reorder</span><span class="p">(</span><span class="n">x_inner</span><span class="p">,</span> <span class="n">y_inner</span><span class="p">,</span> <span class="n">x_outer</span><span class="p">,</span> <span class="n">y_outer</span><span class="p">)</span>
<span class="c1">// fuse the two outer loops and parallelize</span>
<span class="p">.</span><span class="n">fuse</span><span class="p">(</span><span class="n">x_outer</span><span class="p">,</span> <span class="n">y_outer</span><span class="p">,</span> <span class="n">tile_index</span><span class="p">)</span>
<span class="p">.</span><span class="n">parallel</span><span class="p">(</span><span class="n">tile_index</span><span class="p">)</span>
<span class="c1">// tile with size (4, 2), use shorthand this time!</span>
<span class="p">.</span><span class="n">tile</span><span class="p">(</span><span class="n">x_inner</span><span class="p">,</span> <span class="n">y_inner</span><span class="p">,</span> <span class="n">x_inner_outer</span><span class="p">,</span> <span class="n">y_inner_outer</span><span class="p">,</span> <span class="n">x_vectors</span><span class="p">,</span> <span class="n">y_pairs</span><span class="p">,</span> <span class="mi">4</span><span class="p">,</span> <span class="mi">2</span><span class="p">)</span>
<span class="c1">// vectorize over x_vectors (vector length is 4)</span>
<span class="p">.</span><span class="n">vectorize</span><span class="p">(</span><span class="n">x_vectors</span><span class="p">)</span>
<span class="c1">// unroll loop over y_pairs (2 duplications)</span>
<span class="p">.</span><span class="n">unroll</span><span class="p">(</span><span class="n">y_pairs</span><span class="p">);</span>
<span class="c1">// Run the algorithm. Loop bounds are automatically inferred by Halide!</span>
<span class="n">Halide</span><span class="o">::</span><span class="n">Buffer</span><span class="o"><</span><span class="kt">int</span><span class="o">></span> <span class="n">result</span> <span class="o">=</span> <span class="n">index_sum</span><span class="p">.</span><span class="n">realize</span><span class="p">(</span><span class="mi">350</span><span class="p">,</span> <span class="mi">250</span><span class="p">);</span>
<span class="c1">// Print nested loop in pseudo-code.</span>
<span class="n">index_sum</span><span class="p">.</span><span class="n">print_loop_nest</span><span class="p">();</span>
<span class="k">return</span> <span class="mi">0</span><span class="p">;</span>
<span class="p">}</span>
</code></pre></div></div>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ g++ peek.cpp -g -I ../include -L ../bin -lHalide -lpthread -ldl -o peek -std=c++11
$ LD_LIBRARY_PATH=../bin ./peek
produce index_sum:
parallel x.x_outer.tile_index:
for y.y_inner.y_inner_outer:
for x.x_inner.x_inner_outer:
unrolled y.y_inner.y_pairs in [0, 1]:
vectorized x.x_inner.x_vectors in [0, 3]:
index_sum(...) = ...
</code></pre></div></div>

<h1 id="the-autoencoder-family">The autoencoder family (2019-01-31)</h1>

<p>Vanilla autoencoders (AE), denoising autoencoders (DAE), variational autoencoders (VAE), and conditional variational autoencoders (CVAE) are explained in this post. Referring to the <a href="https://jaywonchung.github.io/study/machine-learning/MLE-and-ML/">previous post</a> on Bayesian statistics may help your understanding.</p>
<h1 id="autoencoders-ae">Autoencoders (AE)</h1>
<h2 id="structure">Structure</h2>
<p><img src="/assets/images/posts/2019-01-31-AE.png" alt="Autoencoders" /></p>
<p>As seen in the above structure, autoencoders have the same input and output size. Ultimately, we want the output to be the same as the input. We penalize the difference of the input \(x\) and the output \(y\).</p>
<p>We can formulate the simplest autoencoder (with a single fully connected layer at each side) as:</p>
\[x, y \in [0,1]^d\]
\[z = h_\theta(x) = \text{sigmoid}(Wx+b) ~~~ (\theta = \{W, b\})\]
\[y = g_{\theta^\prime}(z) = \text{sigmoid}(W^\prime z+b^\prime) ~~~ (\theta^\prime = \{W^\prime, b^\prime\})\]
<p>Since we want \(x=y\), we get the following optimization problem:</p>
\[\theta^*, \theta^{\prime *} = \underset{\theta, \theta^\prime}{\text{argmin}} \frac{1}{N} \sum_{i=1}^N l(x^{(i)}, y^{(i)})\]
<p>The \(l(x,y)\) is the loss function, which calculates the difference between \(x\) and \(y\). We can use square error or cross-entropy, which are written as:</p>
\[l(x, y) = \Vert x-y \Vert^2\]
\[l(x, y) = - \sum_{k=1}^d [x_k \log(y_k) + (1-x_k)\log(1-y_k)]\]
<p>We will use cross-entropy error, which we will specially denote as \(l(x, y) = L_H(x, y)\).</p>
<h2 id="statistical-viewpoint">Statistical viewpoint</h2>
<p>We can view this loss function in terms of expectation:</p>
\[\theta^*, \theta^{\prime *} = \underset{\theta, \theta^\prime}{\text{argmin}} \mathbb{E}_{q^0(X)}[L_H(X, g_{\theta^\prime}(h_\theta(X)))]\]
<p>where \(q^0(X)\) denotes the empirical distribution associated with our \(N\) training examples.</p>
<h1 id="denoising-autoencoders-dae">Denoising Autoencoders (DAE)</h1>
<h2 id="structure-1">Structure</h2>
<p><img src="/assets/images/posts/2019-01-31-DAE.png" alt="Denoising Autoencoders" /></p>
<p>With the encoder and decoder formula the same, denoising autoencoders intentionally drop a specific portion of the pixels of the input \(x\) to zero, creating \(\tilde{x}\). Formally, we are sampling \(\tilde{x}\) from a stochastic mapping \(q_D(\tilde{x}\vert x)\). We can compute the loss between the original \(x\) and the output \(y\).</p>
<p>In formulating our objective function, we cannot use that of the vanilla autoencoder since now \(g_{\theta^\prime}(f_\theta(\tilde{x}))\) is a deterministic function of \(\tilde{x}\), not \(x\). Thus we need to take into account the connection between \(\tilde{x}\) and \(x\), which is \(q_D(\tilde{x}\vert x)\). Then we can write our optimization problem and expand it as:</p>
\[\begin{aligned}
\theta^*,\theta^{\prime *}
&= \underset{\theta, \theta^\prime}{\text{argmin}} \mathbb{E}_{q^0(X, \tilde{X})}[L_H(X, g_{\theta^\prime}(f_\theta(\tilde{X})))]\\
&= \underset{\theta, \theta^\prime}{\text{argmin}} \frac{1}{N} \sum_{x\in D} \mathbb{E}_{q_D(\tilde{x}\vert x)}[L_H(x, g_{\theta^\prime}(f_\theta(\tilde{x})))]\\
&\approx \underset{\theta, \theta^\prime}{\text{argmin}}\frac{1}{N} \sum_{x\in D} \frac{1}{L} \sum_{i=1}^L L_H(x, g_{\theta^\prime}(f_\theta(\tilde{x}_i)))
\end{aligned}\]
<p>where \(q^0(X, \tilde{X}) = q^0(X)q_D(\tilde{X}\vert X)\). Since we cannot compute the expectation in the second line, we approximate it with the Monte Carlo technique by drawing \(L\) samples and computing their mean loss.</p>
<h1 id="variational-autoencoders-vae">Variational Autoencoders (VAE)</h1>
<h2 id="structure-2">Structure</h2>
<p>VAEs have the same network structure with AEs; an encoder that calculates latent variable \(z\) and a decoder that generates output image \(y\). Also, we train both networks such that the output image and the input image are the same. However, their goal is what’s different. The goal of an autoencoder is to generate the best feature vector \(z\) from an image, whereas the goal of a variational autoencoder is to generate realistic images from the vector \(z\).</p>
<p>Also, the network structure of AEs and VAEs are not exactly the same. The encoder of an AE directly calculates the latent variable \(z\) from the input. On the other hand, the encoder of a VAE calculates the parameters of a Gaussian distribution ( \(\mu\) and \(\sigma\)), where we then sample our \(z\) from. This is true for the decoder too. AEs output the image itself, but VAE output parameters for the image pixel distribution. Let us put this more formally.</p>
<ul>
<li>
<p><strong>Encoder</strong><br />
Let a standard normal distribution \(p(z)\) be the prior distribution of latent variable \(z\).
Given an input image \(x\), we have our encoder network calculate the posterior distribution \(p(z \vert x)\). Then we sample our latent variable \(z\) from the posterior distribution.</p>
</li>
<li>
<p><strong>Decoder</strong><br />
Given a latent variable \(z\), the likelihood of our decoder outputting \(x\)(the input image) is \(p(x \vert z)\). We usually interpret this as a Multivariate Bernoulli where each pixel of the image corresponds to a dimension.</p>
</li>
</ul>
<h2 id="the-optimization-problem">The Optimization Problem</h2>
<p>We want to sample \(z\) from the posterior \(p(z \vert x)\), which can be expanded with the Bayes Rule.</p>
\[p(z \vert x) = \frac{p(x \vert z)p(z)}{p(x)}\]
<p>However \(p(x) = \int p(x \vert z ) p(z) dz\), the evidence, is intractable since we need to integrate over all possible \(z\). Thus without calculating the posterior \(p(z \vert x)\), we’ll try to approximate it with a Gaussian distribution \(q_\lambda (z \vert x)\). We call this <strong>variational inference</strong>.</p>
<p>Since we want the two distributions \(q_\lambda (z \vert x)\) and \(p(z \vert x)\) to be similar, we adopt the Kullback-Leibler Divergence and try to minimize it with respect to parameter \(\lambda\).</p>
\[\begin{aligned}
D_{KL}(q_\lambda(z \vert x) \vert \vert p(z \vert x))
&= \int_{-\infty}^{\infty} q_\lambda (z \vert x)\log \left( \frac{q_\lambda (z \vert x)}{p(z \vert x)} \right) dz\\
&=\mathbb{E}_q\left[ \log(q_\lambda (z \vert x)) \right] - \mathbb{E}_q \left[ \log (p(z \vert x)) \right] \\
&=\mathbb{E}_q\left[ \log(q_\lambda (z \vert x)) \right] - \mathbb{E}_q \left[ \log (p(z, x)) \right] + \log(p(x))\\
\end{aligned}\]
<p>The problem here is that the intractable \(p(x)\) term is still present. Now let us write the above equation in terms of \(\log(p(x))\).</p>
\[\log(p(x)) = D_{KL}(q_\lambda(z \vert x) \vert \vert p(z \vert x)) + \text{ELBO}(\lambda)\]
<p>where</p>
\[\text{ELBO}(\lambda) = \mathbb{E}_q \left[ \log (p(z, x)) \right] - \mathbb{E}_q\left[ \log(q_\lambda (z \vert x)) \right]\]
<p>KL divergences are always non-negative, and we want to minimize it with respect to \(\lambda\). This is equivalent to <strong>maximizing the ELBO</strong> with respect to \(\lambda\). The abbreviation is revealed: <strong>E</strong>vidence <strong>L</strong>ower <strong>BO</strong>und. This can also be understood as maximizing the evidence \(p(x)\) since we want to maximize the probability of getting the exact input image from the output.</p>
<h2 id="elbo">ELBO</h2>
<p>Let’s inspect the \(\text{ELBO}\) term. Since no two input images share the same latent variable \(z\), we can write \(\text{ELBO}_i (\lambda)\) for a single input image \(x_i\).</p>
\[\begin{aligned}
\text{ELBO}_i (\lambda)
&= \mathbb{E}_q \left[ \log (p(z, x_i)) \right] - \mathbb{E}_q\left[ \log(q_\lambda (z \vert x_i)) \right] \\
&= \int \log(p(z, x_i)) q_\lambda(z \vert x_i) dz - \int \log(q_\lambda(z \vert x_i))q_\lambda(z \vert x_i) dz \\
&= \int \log(p(x_i \vert z)p(z)) q_\lambda(z \vert x_i) dz - \int \log(q_\lambda(z \vert x_i))q_\lambda(z \vert x_i) dz \\
&= \int \log(p(x_i \vert z)) q_\lambda(z \vert x_i) dz - \int q_\lambda(z \vert x_i) \log\left(\frac{q_\lambda(z \vert x_i)}{p(z)}\right)dz \\
&= \mathbb{E}_q \left[ \log (p(x_i \vert z)) \right] - D_{KL}(q_\lambda(z \vert x_i) \vert \vert p(z))
\end{aligned}\]
<p>Now shifting our attention back to the network structure, our encoder network calculates the parameters of \(q_\lambda(z \vert x_i)\), and our decoder network calculates the likelihood \(p(x_i \vert z)\). Thus we can rewrite the above results so that the parameters match those of the autoencoder described above.</p>
\[\text{ELBO}_i(\phi, \theta) = \mathbb{E}_{q_\phi} \left[ \log(p_\theta(x_i \vert z)) \right] - D_{KL}(q_\phi(z \vert x_i) \vert \vert p(z))\]
<p>Negating \(\text{ELBO}_i(\phi, \theta)\), we obtain our loss function for sample \(x_i\).</p>
\[l_i(\phi, \theta) = -\text{ELBO}_i(\phi, \theta)\]
<p>Thus our optimization problem becomes</p>
\[\phi^*, \theta^* = \underset{\phi, \theta}{\text{argmin}} \sum_{i=1}^N \left[ -\mathbb{E}_{q_\phi} \left[ \log(p_\theta(x_i \vert z)) \right] + D_{KL}(q_\phi(z \vert x_i) \vert \vert p(z)) \right]\]
<h2 id="understanding-the-loss-function">Understanding the loss function</h2>
\[l_i(\phi, \theta) = -\underline{\mathbb{E}_{q_\phi} \left[ \log(p_\theta(x_i \vert z)) \right]} + \underline{D_{KL}(q_\phi(z \vert x_i) \vert \vert p(z))}\]
<p>The first underlined part (excluding the negative sign) is to be maximized. This is called the reconstruction loss: how similar the reconstructed image is to the input image. For each latent variable \(z\) we sample from the approximated posterior \(q_\phi(z \vert x_i)\), we calculate the log-likelihood of the decoder producing \(x_i\). Thus maximizing this term is equivalent to the maximum likelihood estimation.</p>
<p>The second term is the Kullback-Leibler Divergence between the approximated posterior \(q_\phi(z \vert x_i)\) and the prior \(p(z)\). This acts as a regularizer, forcing the approximated posterior to be similar to the prior distribution, which is a standard normal distribution.</p>
<p><img src="/assets/images/posts/2019-01-31-Learned-Manifold.JPG" alt="Learned Manifold" /></p>
<p>The above plots 2-dimensional latent variables of 500 test images for an AE and a VAE. As you can see, the distribution of latent variables of VAEs is close to the standard normal distribution, which is due to the regularizer. This is a virtue because, with this property, we can just easily sample a vector \(z\) from the standard normal distribution and feed it to the decoder network to generate a reasonable image. This is ideal because VAEs were intended as a generator.</p>
<h2 id="calculating-the-loss-function">Calculating the loss function</h2>
<p>To train our VAE, we should be able to calculate the loss. Let’s start with the <strong>regularizer</strong> term.</p>
<p><img src="/assets/images/posts/2019-01-31-Gaussian-Encoder.JPG" alt="Gaussian Encoder" /></p>
<p>We create our encoder network such that it calculates the mean and standard deviation of \(q_\phi(z \vert x_i)\). We then sample vector \(z\) from this Multivariate Gaussian distribution: \(z \sim \mathcal{N}(\mu, \sigma^2 I)\).</p>
<p>The KL divergence between two normal distributions is <a href="https://en.wikipedia.org/wiki/Kullback–Leibler_divergence#Multivariate_normal_distributions">known</a>. We can calculate the regularizer term as:</p>
\[D_{KL}(q_\phi(z \vert x_i) \vert \vert p(z)) = \frac{1}{2}\sum_{j=1}^J \left( \mu_{i,j}^2 + \sigma_{i,j}^2 - \log(\sigma_{i,j}^2)-1\right)\]
<p>Now let’s look at the <strong>reconstruction loss</strong> term. To calculate the log-likelihood of our image \(\log(p_\theta(x_i \vert z))\), we should choose how to model our output. We have two choices.</p>
<ol>
<li>
<p>Multivariate Bernoulli Distribution<br />
<img src="/assets/images/posts/2019-01-31-Bernoulli-Decoder.JPG" alt="Bernoulli Decoder" /></p>
<p>This is often reasonable for black and white images like those from MNIST. We binarize the training and testing images with threshold 0.5. We can implement this easily with pytorch:</p>
<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">image</span> <span class="o">=</span> <span class="p">(</span><span class="n">image</span> <span class="o">>=</span> <span class="mf">0.5</span><span class="p">).</span><span class="nb">float</span><span class="p">()</span>
</code></pre></div> </div>
<p>Each output of the decoder corresponds to a single pixel of the image, denoting the probability of the pixel being white. Then we can use the Bernoulli probability mass function \(f(x_{i,j};p_{i,j}) = p_{i,j}^{x_{i,j}} (1-p_{i,j})^{1-x_{i,j}}\) as our likelihood.</p>
\[\begin{aligned}
\log p(x_i \vert z)
&= \sum_{j=1}^D \log(p_{i,j}^{x_{i,j}} (1-p_{i,j})^{1-x_{i,j}}) \\
&= \sum_{j=1}^D \left[x_{i,j} \log(p_{i,j}) + (1-x_{i,j})\log(1-p_{i,j}) \right]
\end{aligned}\]
<p>This is equivalent to the cross-entropy loss.</p>
</li>
<li>
<p>Multivariate Gaussian Distribution<br />
<img src="/assets/images/posts/2019-01-31-Gaussian-Decoder.JPG" alt="Gaussian Decoder" /></p>
<p>The probability density function of a Gaussian distribution is as follows.</p>
\[f(x_{i,j};\mu_{i,j}, \sigma_{i,j}) = \frac{1}{\sqrt{2\pi\sigma_{i,j}^2}}e^{-\frac{(x_{i,j}-\mu_{i,j})^2}{2\sigma_{i,j}^2}}\]
<p>Using this as our likelihood and dropping the constant term,</p>
\[\log p(x_i \vert z) = -\sum_{j=1}^D \left[ \frac{1}{2}\log(\sigma_{i,j}^2)+\frac{(x_{i,j}-\mu_{i,j})^2}{2\sigma_{i,j}^2} \right]\]
<p>Notice that if we fix \(\sigma_{i,j} = 1\), we get the square error.</p>
</li>
</ol>
<p>Now we’ve calculated the posterior \(p_\theta(x_i \vert z)\), we can look at the whole reconstruction loss term. Unfortunately, the expectation is difficult to compute since it takes into account every possible \(z\). So we use the Monte Carlo approximation of expectation by sampling \(L\) \(z_l\)’s from \(q_\phi(z \vert x_i)\) and take their mean log likelihood.</p>
\[\mathbb{E}_{q_\phi} \left[ \log p_\theta(x_i \vert z) \right] \approx \frac{1}{L} \sum_{l=1}^L \log p_\theta(x_i \vert z_l )\]
<p>For convenience, we use \(L = 1\) in implementation.</p>
<h1 id="conditional-variational-autoencoders-cvae">Conditional Variational Autoencoders (CVAE)</h1>
<h2 id="structure-3">Structure</h2>
<p>The CVAE has the same structure and loss function as the VAE, but the input data is different. Notice that in VAEs, we never used the labels of our training data. If we have labels, why don’t we use them?</p>
<p><img src="/assets/images/posts/2019-01-31-CVAE.png" alt="Conditional Variational Autoencoders" /></p>
<p>Now in conditional variational autoencoders, we concatenate the onehot labels with the input images, and also with the latent variables. Everything else is the same.</p>
<h2 id="implications">Implications</h2>
<p>What do we get by doing this? One good thing about this is that the latent variable no longer needs to encode which label the input is. It only needs to encode its styles, or the <strong>class-invariant features</strong> of that image.</p>
<p>Then, we can concatenate any onehot vector to generate an image of the intended class with the specific style encoded by the latent variable.</p>
<p>For more images on generation, check out <a href="https://github.com/jaywonchung/Learning-ML/tree/master/Implementations/Conditional-Variational-Autoencoder">my repository</a>’s README file.</p>
<h1 id="acknowledgements">Acknowledgements</h1>
<ul>
<li>
<p>Images in this post were borrowed from the <a href="https://www.slideshare.net/NaverEngineering/ss-96581209">presentation by Hwalsuk Lee</a>.</p>
</li>
<li>
<p>I’ve implemented everything discussed here. Check out <a href="https://github.com/jaywonchung/Learning-ML">my GitHub repository</a>.</p>
</li>
</ul>

<h1 id="bayesian-statistics-mle-and-machine-learning">Bayesian Statistics, Maximum Likelihood Estimation, and Machine Learning (2019-01-29)</h1>

<h1 id="resources">Resources</h1>
<ul>
<li><a href="https://en.wikipedia.org/wiki/Prior_probability">Wikipedia: Prior Probability</a></li>
<li><a href="https://en.wikipedia.org/wiki/Posterior_probability">Wikipedia: Posterior Probability</a></li>
<li><a href="https://en.wikipedia.org/wiki/Maximum_likelihood_estimation">Wikipedia: Maximum Likelihood Estimation</a></li>
<li><a href="https://www.youtube.com/watch?v=o_peo6U7IRM">Youtube: 오토인코더의 모든 것 1/3</a></li>
</ul>
<h1 id="prior-probability">Prior probability</h1>
<p>The prior probability distribution of an uncertain quantity is the probability distribution about that quantity <strong>before</strong> some evidence is taken into account. This is often expressed as \(p(\theta)\).</p>
<h1 id="posterior-probability">Posterior probability</h1>
<p>The posterior probability of a random event is the conditional probability that is assigned <strong>after</strong> relevant evidence is taken into account. This is often expressed as \(p(\theta | X)\). The prior and posterior probabilities are related by the Bayes’ Theorem as follows:</p>
\[p(\theta | x) = \frac{p(x|\theta)p(\theta)}{p(x)}\]
<h1 id="maximum-likelihood-estimation-mle">Maximum Likelihood Estimation (MLE)</h1>
<p>MLE is a method of estimating the parameters of a statistical model, given observations. Intuitively, we are trying to find the model parameters that make the observed data most probable. This is done by finding the parameters that maximizes the likelihood function \(\mathcal{L}(\theta;x)\). When we are dealing with discrete random variables, the likelihood function is the probability. On the other hand, when we are dealing with continuous random variables, the likelihood function is the value of the probability distribution function.</p>
<p>We can formulate the MLE problem as follows:</p>
\[\theta^* \in \{\underset{\theta}{\text{argmax}} \mathcal{L}(\theta;x)\}\]
<p>where \(\theta\) is the model parameters and \(x\) is the observed data.</p>
<p>We often use the average log-likelihood function</p>
\[\hat{\mathcal{l}}(\theta;x) = \frac{1}{n} \log \mathcal{L}(\theta;x)\]
<p>since it has preferable qualities. One of this is illustrated later in this document.</p>
<h2 id="machine-learning-in-the-mle-perspective">Machine Learning in the MLE perspective</h2>
<p><img src="https://raw.githubusercontent.com/jaywonchung/jaywonchung.github.io/master/assets/images/posts/2019-01-29-ML-model-traditional.png" alt="Tradidional machine learning models" /></p>
<p>A traditional machine learning model for classification is visualized as the above: we receive an input image \(x\) and our model calculates \(f_\theta (x)\), which is a vector denoting the probability for each class. Then based on our label, we calculate the loss function, which is then optimized using gradient descent. Now, let us view this in a maximum likelihood perspective.</p>
<p><img src="https://raw.githubusercontent.com/jaywonchung/jaywonchung.github.io/master/assets/images/posts/2019-01-29-ML-model-MLE.png" alt="Machine learning models in a MLE perspective" /></p>
<p>Now, when we create an ML model, we choose a statistical model that our output may follow. Then, our ML model function calculates the parameters of that statistical model. For example, let us assume that our output \(y\) is one dimensional and has a Gaussian distribution. Then we set \(f_\theta(x)\) to a two-dimensional vector and interpret it as</p>
\[f_\theta(x) =\begin{bmatrix}\mu\\\sigma\end{bmatrix}\]
<p>Thus for each input \(x\) we obtain a Gaussian distribution for \(y\). Using negative log-likelihood, our optimization problem is the following:</p>
\[\theta^* = \underset{\theta}{\text{argmin}}[-\log p(y|f_\theta(x))]\]
<p>If we assume that our inputs are independent and identically distributed (i.i.d), we can obtain the following:</p>
\[p(y|f_\theta(x)) = \prod_i p(y_i|f_\theta(x_i))\]
<p>Rewriting our optimization problem:</p>
\[\theta^* = \underset{\theta}{\text{argmin}}[-\sum_i\log p(y_i|f_\theta(x_i))]\]
<p>When we perform inference from our model, we no longer get determined outputs as we did in traditional machine learning models. We now get a distribution of \(y_\text{new}\),</p>
\[y_\text{new} \sim f_{\theta^*}(x_\text{new})\]
<p>where we should sample a single \(y_\text{new}\).</p>
<h2 id="loss-functions-in-the-mle-perspective">Loss Functions in the MLE perspective</h2>
<p>Two famous loss functions, mean square error and cross-entropy error, can be derived using the MLE perspective.</p>
<p><img src="https://raw.githubusercontent.com/jaywonchung/jaywonchung.github.io/master/assets/images/posts/2019-01-29-Loss-functions-MLE.png" alt="Loss function derived" />
(<a href="https://www.slideshare.net/NaverEngineering/ss-96581209">https://www.slideshare.net/NaverEngineering/ss-96581209</a>)</p>

<h1 id="review-xnor-nets">[Review] XNOR-Nets: ImageNet Classification Using Binary Convolutional Neural Networks (2019-01-18)</h1>

<h1 id="resources">Resources</h1>
<ul>
<li><a href="https://arxiv.org/abs/1603.05279">arXiv</a></li>
<li><a href="http://allenai.org/plato/xnornet">Official XNOR implementation of AlexNet</a></li>
</ul>
<h1 id="abstractintroduction">Abstract/Introduction</h1>
<p>The two models presented:</p>
<blockquote>
<p>In Binary-Weight-Networks, the (convolution) filters are approximated with binary values resulting in 32 x memory saving.</p>
</blockquote>
<blockquote>
<p>In XNOR-Networks, both the filters and the input to convolutional layers are binary. … This results in 58 x faster convolutional operations…</p>
</blockquote>
<p>Implications:</p>
<blockquote>
<p>XNOR-Nets offer the possibility of running state-of-the-art networks on CPUs (rather than GPUs) in real-time.</p>
</blockquote>
<h1 id="binary-convolutional-neural-networks">Binary Convolutional Neural Networks</h1>
<p>For future discussions we use the following mathematical notation for a CNN layer:</p>
<p>\(\mathcal{I}_{l(l=1,...,L)} = \mathbf{I}\in \mathbb{R} ^{c \times w_{\text{in}} \times h_{\text{in}}}\)<br />
\(\mathcal{W}_{lk(k=1,...,K^l)}=\mathbf{W} \in \mathbb{R} ^{c \times w \times h}\)<br />
\(\ast\text{ : convolution}\)<br />
\(\oplus\text{ : convolution without multiplication}\)<br />
\(\otimes \text{ : convolution with XNOR and bitcount}\)<br />
\(\odot \text{ : elementwise multiplication}\)</p>
<h2 id="convolution-with-binary-weights">Convolution with binary weights</h2>
<p>In binary convolutional networks, we estimate the convolution filter weight as \(\mathbf{W}\approx\alpha \mathbf{B}\), where \(\alpha\) is a scalar scaling factor and \(\mathbf{B} \in \{+1, -1\} ^{c \times w \times h}\). Hence, we estimate the convolution operation as follows:</p>
\[\mathbf{I} \ast \mathbf{W}\approx (\mathbf{I}\oplus \mathbf{B})\alpha\]
<p>To find an optimal estimation for \(\mathbf{W}\approx\alpha \mathbf{B}\) we solve the following problem:</p>
\[J(\mathbf{B},\alpha)=\Vert \mathbf{W}-\alpha \mathbf{B}\Vert^2\]
\[\alpha ^*,\mathbf{B}^* =\underset{\alpha, \mathbf{B}}{\text{argmin}}J(\mathbf{B},\alpha)\]
<p>Going straight to the answer:</p>
\[\alpha^* = \frac{1}{n}\Vert \mathbf{W}\Vert_{l1}\]
\[\mathbf{B}^*=\text{sign}(\mathbf{W})\]
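<p>In code, the closed-form solution is a one-liner per quantity (a sketch assuming PyTorch, not the paper’s implementation):</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import torch

def binarize_filter(W: torch.Tensor):
    alpha = W.abs().mean()  # alpha* = (1/n) * ||W||_l1
    B = torch.sign(W)       # B* = sign(W); note sign(0) = 0, a rare edge case
    return alpha, B

W = torch.randn(3, 3, 3)    # one c x w x h filter
alpha, B = binarize_filter(W)
W_approx = alpha * B        # W is approximated by alpha * B
</code></pre></div></div>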
<h2 id="training">Training</h2>
<p>The gradients are computed as follows:</p>
\[\frac{\partial \text{sign}}{\partial r}=r \text{1}_{\vert r \vert \le1}\]
\[\frac{\partial L}{\partial \mathbf{W}_i}=\frac{\partial L}{\partial \widetilde{\mathbf{W}_i}}\left(\frac{1}{n} + \frac{\partial \text{sign}}{\partial \mathbf{W}_i}\alpha \right)\]
<p>where \(\widetilde{\mathbf{W}}=\alpha \mathbf{B}\), the estimated value of \(\mathbf{W}\).</p>
<p>The gradient values are kepted as real values; they cannot be binarized due to excessive information loss. Optimization is done by either SGD with momentum or ADAM.</p>
<h1 id="xnor-networks">XNOR-Networks</h1>
<p>Convolutions are a set of dot products between a submatrix of the input and a filter. Thus we attempt to express dot products in terms of binary operations.</p>
<h2 id="binary-dot-product">Binary Dot Product</h2>
<p>For vectors \(\mathbf{X}, \mathbf{W} \in \mathbb{R}^n\) and \(\mathbf{H}, \mathbf{B} \in \{+1,-1\}^n\), we approximate the dot product between \(\mathbf{X}\) and \(\mathbf{W}\) as</p>
\[\mathbf{X}^\top \mathbf{W} \approx \beta \mathbf{H}^\top \alpha \mathbf{B}\]
<p>We solve the following optimization problem:</p>
\[\alpha^*, \mathbf{H}^*, \beta^*, \mathbf{B}^*=\underset{\alpha, \mathbf{H}, \beta, \mathbf{B}}{\text{argmin}} \Vert \mathbf{X} \odot \mathbf{W} - \beta \alpha \mathbf{H} \odot \mathbf{B} \Vert\]
<p>Going straight to the answer:</p>
\[\alpha^* \beta^*=\left(\frac{1}{n}\Vert \mathbf{X} \Vert_{l1}\right)\left(\frac{1}{n}\Vert \mathbf{W} \Vert_{l1}\right)\]
\[\mathbf{H}^* \odot \mathbf{B}^*=\text{sign}(\mathbf{X}) \odot \text{sign}(\mathbf{W})\]
<h2 id="convolution-with-binary-inputs-and-weights">Convolution with binary inputs and weights</h2>
<p>Calculating \(\alpha^* \beta^*\) for every submatrix in input tensor \(\mathbf{I}\) involves a large number of redundant computations. To overcome this inefficiency we first calculate</p>
\[\mathbf{A}=\frac{\sum{\vert \mathbf{I}_{:,:,i} \vert}}{c}\]
<p>which is an average over absolute values of \(\mathbf{I}\) along its channel. Then, we convolve \(\mathbf{A}\) with a 2D filter \(\mathbf{k} \in \mathbb{R}^{w \times h}\) where \(\forall ij \ \mathbf{k}_{ij}=\frac{1}{w \times h}\):</p>
\[\mathbf{K}=\mathbf{A} \ast \mathbf{k}\]
<p>This \(\mathbf{K}\) acts as a global \(\beta\) spatially across the submatrices. Now we can estimate our convolution with binary inputs and weights as:</p>
\[\mathbf{I} \ast \mathbf{W} \approx (\text{sign}(\mathbf{I}) \otimes \text{sign}(\mathbf{W})) \odot \mathbf{K} \alpha\]
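<p>A sketch (assuming PyTorch) of computing the scaling map \(\mathbf{K}\) described above:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import torch
import torch.nn.functional as F

def scaling_map(I: torch.Tensor, w: int, h: int) -> torch.Tensor:
    # I has shape (N, c, H_in, W_in).
    A = I.abs().mean(dim=1, keepdim=True)        # average |I| over channels, (N, 1, H_in, W_in)
    k = torch.full((1, 1, h, w), 1.0 / (w * h))  # k_ij = 1 / (w * h)
    return F.conv2d(A, k)                        # K: one scaling factor per output location
</code></pre></div></div>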
<h2 id="training-1">Training</h2>
<p>A CNN block in XNOR-Net has the following structure:</p>
<p><code class="language-plaintext highlighter-rouge">[Binary Normalization] - [Binary Activation] - [Binary Convolution] - [Pool]</code></p>
<p>The BinNorm layer normalizes the input batch by its mean and variance. The BinActiv layer calculates \(\mathbf{K}\) and \(\text{sign}(\mathbf{I})\). We may insert a non-linear activation function between the BinConv layer and the Pool layer.</p>
<h1 id="experiments">Experiments</h1>
<p>The paper implemented the AlexNet, the Residual Net, and a GoogLenet variant(Darknet) with binary convolutions. This resulted in a few percent point of accuracy decrease, but overall worked fairly well. Refer to the paper for details.</p>
<h1 id="discussion">Discussion</h1>
<p>Binary convolutions were not at all entirely binary; the gradients had to be real values. It would be fascinating if even the gradient is binarizable.</p>Jae-Won ChungA model that binarizes both the input and convolution filters, offering the possibility of running SOTA networks on CPUs.공학수학 1,2 필기 공유2018-10-31T00:00:00-04:002018-10-31T00:00:00-04:00https://jaewonchung.me/study/lectures/EM-notes<p>Written with iPad Pro and Apple Pencil during and after lectures.
Feel free to share freely as long as you keep the email address in the lower right corner.
No commercial use.</p>
<p>The notes were written with an iPad Pro and Apple Pencil, during or after actually attending the lectures. Feel free to share them as long as you keep the email address in the lower right corner. Commercial use is not allowed.</p>
<p>Google Drive: <a href="https://drive.google.com/open?id=1fJDoA_5gIAPB1BeIXKLVpGXyFZlNS0BA">https://drive.google.com/open?id=1fJDoA_5gIAPB1BeIXKLVpGXyFZlNS0BA</a></p>
<p><img src="/assets/images/posts/2018-10-31-EM1.png" alt="example page 1" /></p>
<p><img src="/assets/images/posts/2018-10-31-EM2.png" alt="example page 2" /></p>
<p><img src="/assets/images/posts/2018-10-31-EM3.png" alt="example page 3" /></p>
<p><img src="/assets/images/posts/2018-10-31-EM4.png" alt="example page 4" /></p>
<p><img src="/assets/images/posts/2018-10-31-EM5.png" alt="example page 5" /></p>Jae-Won ChungGiving out handwritten notes for Engineering Mathematics 1 and 2