<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>regression on Casual Inference</title>
    <link>https://www.casualinf.com/tags/regression/</link>
    <description>Recent content in regression on Casual Inference</description>
    <generator>Hugo -- gohugo.io</generator>
    <lastBuildDate>Thu, 07 Jan 2021 00:00:00 +0000</lastBuildDate><atom:link href="https://www.casualinf.com/tags/regression/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Linear Regression on Coffee Rating Data</title>
      <link>https://www.casualinf.com/post/2021-01-07-linear-regression-on-coffee-rating-data/</link>
      <pubDate>Thu, 07 Jan 2021 00:00:00 +0000</pubDate>
      
      <guid>https://www.casualinf.com/post/2021-01-07-linear-regression-on-coffee-rating-data/</guid>
      <description>While I am reading Elements of Statistical Learning, I figured it would be a good idea to try out the machine learning methods introduced in the book. I just finished a chapter on linear regression and learned more about linear regression and the penalized methods (Ridge and Lasso). Since there are abundant resources available online, it would be redundant to get into the details. I’ll briefly go over Ordinary Least Squares, Ridge, and Lasso regression, and then show an application of those methods in R.</description>
    </item>
    
    <item>
      <title>Two-Dimension LDA</title>
      <link>https://www.casualinf.com/post/two-dimension-lda/</link>
      <pubDate>Mon, 04 Feb 2019 00:00:00 +0000</pubDate>
      
      <guid>https://www.casualinf.com/post/two-dimension-lda/</guid>
      <description>LDA, Linear Discriminant Analysis, is a classification method and a dimension reduction technique. I’ll focus more on classification. LDA calculates a linear discriminant function (which arises from assuming a Gaussian distribution) for each class and chooses the class that maximizes that function. The linear discriminant function therefore dictates a linear decision boundary for choosing a class. The decision boundary is linear in the feature space, though discriminant analysis itself isn’t inherently linear.</description>
    </item>
    
  </channel>
</rss>
