Monday, June 24, 2013

Week of 6/17 - 6/21, 2013

This week was dedicated to fitting sample luminosity functions. To do this I plotted a sample set of data (number of galaxies vs. luminosities). I used the Schechter Luminosity Function.
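In its standard form, with normalization phi*, characteristic luminosity L*, and faint-end slope alpha, the Schechter function is

\Phi(L)\,dL \;=\; \phi^{*}\left(\frac{L}{L^{*}}\right)^{\alpha} e^{-L/L^{*}}\,\frac{dL}{L^{*}}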

This function can be fitted by minimizing chi-squared; many fitting problems in Python can be expressed as least-squares problems. Chi-squared looks like this:
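Writing N_i for the observed number of galaxies in bin i, m_i for the value of the fitted curve at that bin, and sigma_i for the error on that point,

\chi^{2} \;=\; \sum_{i} \frac{\left(N_{i} - m_{i}\right)^{2}}{\sigma_{i}^{2}}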


The numerator represents the distance of a data point from the best-fit curve. It is squared so that points above and below the curve both contribute. The denominator, sigma squared, represents the Poisson error of the point. The whole expression is summed over all data points. In a perfect world, chi-squared would be as small as possible. The lowest value I was able to achieve was 50.5. This sounds high, but it is a great improvement over the value (6545.3) I had before debugging my program.

In this exercise, I plotted two sets of sample data and fitted both using chi-squared. The two programs are nearly identical except for the file each one reads in and the number of bins used.
Here is my program:
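In outline, the fit can be set up with scipy.optimize.curve_fit (one way to do a least-squares fit in Python); in the sketch below the file name and starting guesses are placeholders:

# Sketch of a chi-squared (least-squares) fit of the Schechter function
# to binned sample data. File name and starting guesses are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def schechter(L, phi_star, L_star, alpha):
    # Schechter function: phi* (L/L*)^alpha exp(-L/L*)
    x = L / L_star
    return phi_star * x**alpha * np.exp(-x)

# Two-column text file: luminosity, number of galaxies in that bin.
lum, ngal = np.loadtxt('sample_lf.txt', unpack=True)

# Poisson error on each bin (floor of 1 so empty bins don't give sigma = 0).
sigma = np.sqrt(np.maximum(ngal, 1.0))

# With per-point errors, curve_fit minimizes sum(((ngal - model)/sigma)**2),
# i.e. chi-squared.
p0 = [ngal.max(), lum.mean(), -1.0]
popt, pcov = curve_fit(schechter, lum, ngal, p0=p0, sigma=sigma)

model = schechter(lum, *popt)
chi2 = np.sum(((ngal - model) / sigma) ** 2)
print('best-fit phi*, L*, alpha:', popt)
print('chi-squared:', chi2)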



The program produces these graphs:







In addition to creating this program, I researched the hydrogen-alpha (H-alpha) line. The emission line occurs when a hydrogen electron falls from the third to the second energy level (n = 3 to n = 2). Since hydrogen is the most abundant element in the universe, the hydrogen-alpha line is the brightest emission line in interstellar space.
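For reference, its wavelength follows from the Rydberg formula for the n = 3 to n = 2 transition:

\frac{1}{\lambda} \;=\; R_{H}\left(\frac{1}{2^{2}} - \frac{1}{3^{2}}\right) \approx 1.097\times10^{7}\,\mathrm{m^{-1}}\times\frac{5}{36} \;\;\Rightarrow\;\; \lambda \approx 656\ \mathrm{nm},

which is in the red part of the visible spectrum.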

Next week I will work more on fitting our luminosity functions and construct a website to display our team's work so far during Astro Research at Siena. On Wednesday, June 26th, we will travel to Union College to present our work and learn more about undergraduate research in the local area.

Friday, June 14, 2013

Week of 6/10 - 6/14, 2013

This week began with the continued addition of summaries to our group document. The document holds summaries from Debbie Johnson, Brendan Gallagher, and myself on various topics in astronomy. For example, one paper I reviewed was by Sara Ellison concerning merging galaxies and ultra-luminous infrared galaxies (ULIRGs). ULIRGs are galaxies with infrared luminosities greater than or equal to 10^12 solar luminosities (these galaxies make our Sun look very faint in comparison).

I also learned more about the Galaxy Luminosity Function. This function describes the relative number of galaxies as a function of luminosity. Using sample data, I was able to plot the Luminosity Function.

Schechter Luminosity Function: (a way to fit the Luminosity Function)
Sample set of data (number of galaxies vs. luminosities):

Friday, June 7, 2013

Week of 6/3 - 6/7, 2013

This week brought on a whole new set of skills and tasks for my Astro Research. For starters, I was given a specific set of instructions to guide me for the rest of the summer. That list looks like this:


    1. The Local Field (WISE)
      1. overview of the survey
        1. WISE
        2. NASA-Sloan Atlas
      2. tasks
        1. identify environment of galaxies - are they in field/group/cluster
        2. write a program that will fit a luminosity function to data - there should be some existing python programs that we can use, but this will require some research
        3. match the NASA-Sloan Atlas with the galaxy zoo catalogs (see the matching sketch after this list)


      3. plots
        1. number of galaxies versus Mg
        2. number of galaxies versus g-band luminosity
        3. number of galaxies per Mpc^3 versus g-band luminosity
        4. fit a schechter luminosity function to the above plot
        5. now repeat using 22-micron luminosity
        6. plot number of galaxies per Mpc^3 versus 22-micron luminosity for elliptical, spiral, and other (this requires matching the NASA-Sloan Atlas to the Galaxy Zoo catalog).
      4. issues to think about
        1. completeness - how well do we recover sources of a given flux, and how do we account for this when fitting a luminosity function?  This is going to be a little complicated because the NASA-Sloan Atlas is a mix of many different surveys.  For simplicity, we might first limit ourselves to using only the galaxies that are in the sloan digital sky survey.
        2. how do we convert from 22-micron luminosity to total infrared luminosity, and why do we want to do this?
        3. how sensitive is WISE compared to the Local Cluster Survey 24-micron observations?
      5. compare with previous results and with results from local cluster survey
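For the catalog-matching task above, one possible approach is a positional cross-match; the sketch below is a rough illustration using astropy's SkyCoord, with stand-in RA/Dec arrays in place of the real NASA-Sloan Atlas and Galaxy Zoo coordinates:

# Rough sketch of a NASA-Sloan Atlas / Galaxy Zoo positional match.
# The RA/Dec arrays are stand-in values; the real ones would be read
# from the two catalogs.
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

nsa_ra, nsa_dec = np.array([185.0000, 210.5000]), np.array([12.3000, -0.8000])
zoo_ra, zoo_dec = np.array([185.0004, 150.0000]), np.array([12.3001, 2.2000])

nsa = SkyCoord(ra=nsa_ra * u.deg, dec=nsa_dec * u.deg)
zoo = SkyCoord(ra=zoo_ra * u.deg, dec=zoo_dec * u.deg)

# For each NSA galaxy, find the nearest Galaxy Zoo source on the sky.
idx, sep2d, _ = nsa.match_to_catalog_sky(zoo)

# Accept only matches within a small tolerance (3 arcseconds here).
good = sep2d < 3 * u.arcsec
print('matched NSA rows:', np.where(good)[0])
print('corresponding Galaxy Zoo rows:', idx[good])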




Just as promised last week, I have begun using actual data from WISE to construct plots based on the above guidelines. Struggling through it, I have completed the first two plots and cleaned them up to make them appear more professional. Here are my plots of number of galaxies versus g-band luminosity:







You may be wondering why I have created two histograms that use identical data. I wanted to explain what "bins" are and how they are used in the code. The first histogram is constructed using 500 bins, meaning the data (luminosities observed in the g-band) are split into 500 separate containers (bins). This explains why the maximum value on the y-axis (number of galaxies) of this histogram is much lower (5,000 versus 25,000) than on the lower histogram. In the lower plot, the data are put into just 100 bins. Because there are fewer bins to put data into, more galaxies end up in the same bins, so the maximum number of galaxies in a bin (y-axis) is much higher. In other words, there will be more galaxies per bin if there are fewer bins to put galaxies into.
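As a toy illustration of the bin effect (the array lum_g below is made-up data standing in for the real g-band luminosities), the only difference between the two plots is the bins argument passed to hist:

# Toy example: the same data histogrammed with 500 bins and with 100 bins.
import numpy as np
import matplotlib.pyplot as plt

lum_g = np.random.lognormal(mean=0.0, sigma=1.0, size=100000)  # fake luminosities

fig, (top, bottom) = plt.subplots(2, 1, figsize=(6, 8))
top.hist(lum_g, bins=500)      # many narrow bins -> fewer galaxies per bin
bottom.hist(lum_g, bins=100)   # fewer, wider bins -> more galaxies per bin
top.set_ylabel('number of galaxies')
bottom.set_ylabel('number of galaxies')
bottom.set_xlabel('g-band luminosity')
plt.show()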

In addition to building code for our data, I was busy summarizing papers written by astronomers working with the same kind of data that I am. I learned new information about luminous infrared galaxies (LIRGs) as they pertain to star formation rate (SFR), the environment of a galaxy (field, group, or cluster), the "flatness problem," critical density, and what the virial radius means. Understanding the science definitely helps me work through all of the scientific terms and units thrown at me during research.

As far as UNIX goes, I now have a better understanding of git commands and job-control ("jobs") commands. With git, I am more familiar with checking statuses, adding files, committing them, and finally pushing them to the remote repository on GitHub. I also learned how to work with multiple jobs running at the same time and how to terminate them. Having fewer jobs running makes it easier to navigate to the important ones.

Next week I plan to finish my article summaries and push forward in creating more plots.

--Mike Englert

P.S. definition: foobar: a universal variable understood to represent whatever is being discussed