
Source Code for Module ComboCode.cc.ivs.sed.builder

   1  # -*- coding: utf-8 -*- 
   2  """ 
   3  SED builder program. 
   4   
   5  To construct an SED, use the SED class. The functions defined in this module 
   6  are mainly convenience functions specifically for that class, but can be used 
   7  outside of the SED class if you know what you're doing. 
   8   
   9  Table of contents: 
  10   
  11      1. Retrieving and plotting photometry of a target 
  12      2. Where/what is my target? 
  13      3. SED fitting using a grid based approach 
  14          - Saving SED fits 
  15          - Loading SED fits 
  16      4. Accessing the best fitting full SED model 
  17      5. Radii, distances and luminosities 
  18      - Relations between quantities 
  19      - Seismic constraints 
  20      - Parallaxes 
  21      - Reddening constraints 
  22      - Evolutionary constraints 
  23   
  24  Section 1. Retrieving and plotting photometry of a target 
  25  ========================================================= 
  26   
  27  >>> mysed = SED('HD180642') 
  28  >>> mysed.get_photometry() 
  29  >>> mysed.plot_data() 
  30   
  31  and call Pylab's C{show} function to show the plot on the screen: 
  32   
  33  ]]include figure]]ivs_sed_builder_example_photometry.png] 
  34   
  35  Catch IndexErrors and TypeErrors in case no photometry is found. 
  36   
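The two exceptions mentioned above can be handled with a small wrapper. A minimal sketch; C{failing_getter} below is a hypothetical stand-in for C{get_photometry} on a target without any catalog photometry:

```python
def safe_get_photometry(getter):
    """Run a photometry getter; return False when no photometry is found."""
    try:
        getter()
    except (IndexError, TypeError):
        # these are raised when the catalogs contain no photometry for the target
        return False
    return True

# hypothetical stand-in for mysed.get_photometry on a target without photometry
def failing_getter():
    raise IndexError("no photometry found")

print(safe_get_photometry(failing_getter))  # False
```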
  37  You can give a B{search radius} to C{get_photometry} via the keyword C{radius}. 
  38  The default value is 10 arcseconds for stars dimmer than 6th magnitude, and 60 
  39  arcseconds for brighter stars. The best value of course depends on the density 
  40  of the field. 
  41   
  42  >>> mysed.get_photometry(radius=5.) 
  43   
  44  If your star's name is not recognised by any catalog, you can give coordinates 
  45  to look for photometry. In that case, the ID of the star given in the C{SED} 
  46  command will not be used to search photometry (only to save the phot file): 
  47   
  48  >>> mysed.get_photometry(ra=289.31167983,dec=1.05941685) 
  49   
  50  Note that C{ra} and C{dec} are given in B{degrees}. 
  51   
  52  It is best to B{switch on the logger} (see L{cc.ivs.aux.loggers.get_basic_logger}) 
  53  to see the progress: access to catalogs can sometimes take a long time (the GATOR 
  54  sources are typically slow). If one of the C{gator}, C{vizier} or C{gcpd} sources 
  55  is impossibly slow or its site is down, you can B{include/exclude these sources} 
  56  via the keywords C{include} or C{exclude}, which take a list of strings (choose 
  57  from C{gator}, C{vizier} and/or C{gcpd}). For VizieR, there is an extra option to 
  58  change to another mirror site via 
  59   
  60  >>> vizier.change_mirror() 
  61   
  62  The L{vizier.change_mirror} function cycles through all the mirrors continuously, 
  63  so sooner or later you will end up with the default one and repeat the cycle. 
  64   
  65  >>> mysed.get_photometry(exclude=['gator']) 
  66   
  67  The results will be written to the file B{HD180642.phot}. An example content is:: 
  68   
  69    #  meas    e_meas flag unit photband          source                        _r   _RAJ2000    _DEJ2000   cwave       cmeas     e_cmeas cunit       color include 
  70    #float64   float64 |S20 |S30 |S30              |S50                     float64    float64     float64 float64     float64     float64 |S50         bool    bool 
  71      7.823      0.02 nan  mag  WISE.W3           wise_prelim_p3as_psd    0.112931  2.667e-05   2.005e-05  123337 4.83862e-17 8.91306e-19 erg/s/cm2/AA     0       1 
  72      7.744     0.179 nan  mag  WISE.W4           wise_prelim_p3as_psd    0.112931  2.667e-05   2.005e-05  222532 4.06562e-18 6.70278e-19 erg/s/cm2/AA     0       1 
  73       7.77     0.023 nan  mag  WISE.W1           wise_prelim_p3as_psd    0.112931  2.667e-05   2.005e-05 33791.9   6.378e-15  1.3511e-16 erg/s/cm2/AA     0       1 
  74      7.803      0.02 nan  mag  WISE.W2           wise_prelim_p3as_psd    0.112931  2.667e-05   2.005e-05   46293 1.82691e-15 3.36529e-17 erg/s/cm2/AA     0       1 
  75      8.505     0.016 nan  mag  TYCHO2.BT         I/259/tyc2                 0.042   7.17e-06    1.15e-06  4204.4 2.76882e-12 4.08029e-14 erg/s/cm2/AA     0       1 
  76      8.296     0.013 nan  mag  TYCHO2.VT         I/259/tyc2                 0.042   7.17e-06    1.15e-06 5321.86 1.93604e-12 2.31811e-14 erg/s/cm2/AA     0       1 
  77       8.27       nan nan  mag  JOHNSON.V         II/168/ubvmeans             0.01    1.7e-07    3.15e-06 5504.67 1.80578e-12 1.80578e-13 erg/s/cm2/AA     0       1 
  78       0.22       nan nan  mag  JOHNSON.B-V       II/168/ubvmeans             0.01    1.7e-07    3.15e-06     nan     1.40749    0.140749 flux_ratio      1       0 
  79      -0.66       nan nan  mag  JOHNSON.U-B       II/168/ubvmeans             0.01    1.7e-07    3.15e-06     nan     1.21491    0.121491 flux_ratio      1       0 
  80       8.49       nan nan  mag  JOHNSON.B         II/168/ubvmeans             0.01    1.7e-07    3.15e-06 4448.06 2.54162e-12 2.54162e-13 erg/s/cm2/AA     0       1 
  81       7.83       nan nan  mag  JOHNSON.U         II/168/ubvmeans             0.01    1.7e-07    3.15e-06 3641.75 3.08783e-12 3.08783e-13 erg/s/cm2/AA     0       1 
  82      2.601       nan nan  mag  STROMGREN.HBN-HBW J/A+A/528/A148/tables       0.54 -5.983e-05 -0.00013685     nan     1.66181    0.166181 flux_ratio      1       0 
  83     -0.043       nan nan  mag  STROMGREN.M1      J/A+A/528/A148/tables       0.54 -5.983e-05 -0.00013685     nan    0.961281   0.0961281 flux_ratio      1       0 
  84      8.221       nan nan  mag  STROMGREN.Y       J/A+A/528/A148/tables       0.54 -5.983e-05 -0.00013685 5477.32 1.88222e-12 1.88222e-13 erg/s/cm2/AA     0       1 
  85      0.009       nan nan  mag  STROMGREN.C1      J/A+A/528/A148/tables       0.54 -5.983e-05 -0.00013685     nan     0.93125    0.093125 flux_ratio      1       0 
  86      0.238       nan nan  mag  STROMGREN.B-Y     J/A+A/528/A148/tables       0.54 -5.983e-05 -0.00013685     nan     1.28058    0.128058 flux_ratio      1       0 
  87      8.459       nan nan  mag  STROMGREN.B       J/A+A/528/A148/tables       0.54 -5.983e-05 -0.00013685  4671.2 2.41033e-12 2.41033e-13 erg/s/cm2/AA     0       1 
  88      8.654       nan nan  mag  STROMGREN.V       J/A+A/528/A148/tables       0.54 -5.983e-05 -0.00013685 4108.07 2.96712e-12 2.96712e-13 erg/s/cm2/AA     0       1 
  89      8.858       nan nan  mag  STROMGREN.U       J/A+A/528/A148/tables       0.54 -5.983e-05 -0.00013685 3462.92 3.40141e-12 3.40141e-13 erg/s/cm2/AA     0       1 
  90       7.82      0.01 nan  mag  JOHNSON.J         J/PASP/120/1128/catalog     0.02  1.017e-05    3.15e-06 12487.8 2.36496e-13 2.36496e-15 erg/s/cm2/AA     0       1 
  91       7.79      0.01 nan  mag  JOHNSON.K         J/PASP/120/1128/catalog     0.02  1.017e-05    3.15e-06 21951.2 3.24868e-14 3.24868e-16 erg/s/cm2/AA     0       1 
  92       7.83      0.04 nan  mag  JOHNSON.H         J/PASP/120/1128/catalog     0.02  1.017e-05    3.15e-06 16464.4 8.64659e-14 3.18552e-15 erg/s/cm2/AA     0       1 
  93     8.3451    0.0065 nan  mag  HIPPARCOS.HP      I/239/hip_main             0.036   2.17e-06    1.15e-06 5275.11 1.91003e-12 1.91003e-14 erg/s/cm2/AA     0       1 
  94      8.525     0.011 nan  mag  TYCHO2.BT         I/239/hip_main             0.036   2.17e-06    1.15e-06  4204.4 2.71829e-12   2.754e-14 erg/s/cm2/AA     0       1 
  95      8.309     0.012 nan  mag  TYCHO2.VT         I/239/hip_main             0.036   2.17e-06    1.15e-06 5321.86   1.913e-12 2.11433e-14 erg/s/cm2/AA     0       1 
  96       8.02     0.057 nan  mag  COUSINS.I         II/271A/patch2               0.2 -4.983e-05   2.315e-05 7884.05 7.51152e-13 3.94347e-14 erg/s/cm2/AA     0       1 
  97      8.287     0.056 nan  mag  JOHNSON.V         II/271A/patch2               0.2 -4.983e-05   2.315e-05 5504.67 1.77773e-12 9.16914e-14 erg/s/cm2/AA     0       1 
  98       8.47       nan nan  mag  USNOB1.B1         I/284/out                  0.026   7.17e-06     1.5e-07 4448.06 2.57935e-12 7.73805e-13 erg/s/cm2/AA     0       1 
  99       8.19       nan nan  mag  USNOB1.R1         I/284/out                  0.026   7.17e-06     1.5e-07 6939.52 1.02601e-12 3.07803e-13 erg/s/cm2/AA     0       1 
 100      8.491     0.012 nan  mag  JOHNSON.B         I/280B/ascc                0.036   2.17e-06    2.15e-06 4448.06 2.53928e-12 2.80651e-14 erg/s/cm2/AA     0       1 
 101      8.274     0.013 nan  mag  JOHNSON.V         I/280B/ascc                0.036   2.17e-06    2.15e-06 5504.67 1.79914e-12 2.15419e-14 erg/s/cm2/AA     0       1 
 102      7.816     0.023 nan  mag  2MASS.J           II/246/out                 0.118 -2.783e-05   1.715e-05 12412.1 2.28049e-13 4.83093e-15 erg/s/cm2/AA     0       1 
 103      7.792     0.021 nan  mag  2MASS.KS          II/246/out                 0.118 -2.783e-05   1.715e-05 21909.2 3.26974e-14 6.32423e-16 erg/s/cm2/AA     0       1 
 104      7.825     0.042 nan  mag  2MASS.H           II/246/out                 0.118 -2.783e-05   1.715e-05 16497.1 8.48652e-14 3.28288e-15 erg/s/cm2/AA     0       1 
 105      8.272     0.017 nan  mag  GENEVA.V          GCPD                         nan        nan         nan  5482.6 1.88047e-12 2.94435e-14 erg/s/cm2/AA     0       1 
 106        1.8     0.004 nan  mag  GENEVA.G-B        GCPD                         nan        nan         nan     nan    0.669837  0.00669837 flux_ratio      1       0 
 107      1.384     0.004 nan  mag  GENEVA.V1-B       GCPD                         nan        nan         nan     nan    0.749504  0.00749504 flux_ratio      1       0 
 108       0.85     0.004 nan  mag  GENEVA.B1-B       GCPD                         nan        nan         nan     nan     1.05773   0.0105773 flux_ratio      1       0 
 109      1.517     0.004 nan  mag  GENEVA.B2-B       GCPD                         nan        nan         nan     nan    0.946289  0.00946289 flux_ratio      1       0 
 110      0.668     0.004 nan  mag  GENEVA.V-B        GCPD                         nan        nan         nan     nan    0.726008  0.00726008 flux_ratio      1       0 
 111      0.599     0.004 nan  mag  GENEVA.U-B        GCPD                         nan        nan         nan     nan     1.13913   0.0113913 flux_ratio      1       0 
 112      7.604 0.0174642 nan  mag  GENEVA.B          GCPD                         nan        nan         nan 4200.85 2.59014e-12 4.16629e-14 erg/s/cm2/AA     0       1 
 113      9.404 0.0179165 nan  mag  GENEVA.G          GCPD                         nan        nan         nan 5765.89 1.73497e-12   2.863e-14 erg/s/cm2/AA     0       1 
 114      8.988 0.0179165 nan  mag  GENEVA.V1         GCPD                         nan        nan         nan 5395.63 1.94132e-12 3.20351e-14 erg/s/cm2/AA     0       1 
 115      8.454 0.0179165 nan  mag  GENEVA.B1         GCPD                         nan        nan         nan 4003.78 2.73968e-12 4.52092e-14 erg/s/cm2/AA     0       1 
 116      9.121 0.0179165 nan  mag  GENEVA.B2         GCPD                         nan        nan         nan 4477.56 2.45102e-12  4.0446e-14 erg/s/cm2/AA     0       1 
 117      8.203 0.0179165 nan  mag  GENEVA.U          GCPD                         nan        nan         nan 3421.62 2.95052e-12 4.86885e-14 erg/s/cm2/AA     0       1 
 118       8.27       nan nan  mag  JOHNSON.V         GCPD                         nan        nan         nan 5504.67 1.80578e-12 1.80578e-13 erg/s/cm2/AA     0       1 
 119       0.22       nan nan  mag  JOHNSON.B-V       GCPD                         nan        nan         nan     nan     1.40749    0.140749 flux_ratio      1       0 
 120      -0.66       nan nan  mag  JOHNSON.U-B       GCPD                         nan        nan         nan     nan     1.21491    0.121491 flux_ratio      1       0 
 121       8.49       nan nan  mag  JOHNSON.B         GCPD                         nan        nan         nan 4448.06 2.54162e-12 2.54162e-13 erg/s/cm2/AA     0       1 
 122       7.83       nan nan  mag  JOHNSON.U         GCPD                         nan        nan         nan 3641.75 3.08783e-12 3.08783e-13 erg/s/cm2/AA     0       1 
 123     -0.035       nan nan  mag  STROMGREN.M1      GCPD                         nan        nan         nan     nan    0.954224   0.0954224 flux_ratio      1       0 
 124      0.031       nan nan  mag  STROMGREN.C1      GCPD                         nan        nan         nan     nan     0.91257    0.091257 flux_ratio      1       0 
 125      0.259       nan nan  mag  STROMGREN.B-Y     GCPD                         nan        nan         nan     nan     1.25605    0.125605 flux_ratio      1       0 
 126     -0.009 0.0478853 nan  mag  2MASS.J-H         II/246/out                 0.118 -2.783e-05   1.715e-05     nan     2.68719    0.118516 flux_ratio      1       0 
 127     -0.033 0.0469574 nan  mag  2MASS.KS-H        II/246/out                 0.118 -2.783e-05   1.715e-05     nan    0.385286   0.0166634 flux_ratio      1       0 
 128      -0.01 0.0412311 nan  mag  JOHNSON.J-H       J/PASP/120/1128/catalog     0.02  1.017e-05    3.15e-06     nan     2.73514    0.103867 flux_ratio      1       0 
 129      -0.04 0.0412311 nan  mag  JOHNSON.K-H       J/PASP/120/1128/catalog     0.02  1.017e-05    3.15e-06     nan    0.375718    0.014268 flux_ratio      1       0 
 130      0.209 0.0206155 nan  mag  TYCHO2.BT-VT      I/259/tyc2                 0.042   7.17e-06    1.15e-06     nan     1.43014    0.027155 flux_ratio      1       0 
 131   
 132  Once a .phot file is written and L{get_photometry} is called again for the same 
 133  target, the script will B{not retrieve the photometry from the internet again}, 
 134  but will use the contents of the file instead. The purpose is minimizing network 
 135  traffic and maximizing speed. If you want to refresh the search, simply manually 
 136  delete the .phot file or set C{force=True} when calling L{get_photometry}. 
 137   
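The caching behaviour can be sketched as follows; C{get_photometry_cached}, C{download} and C{read} are hypothetical stand-ins for the actual method, network retrieval and file parsing:

```python
import os

def get_photometry_cached(photfile, download, read, force=False):
    # reuse an existing .phot file unless a refresh is forced
    if os.path.isfile(photfile) and not force:
        return read(photfile)   # no network traffic
    return download()           # a real implementation also writes photfile

# tiny demonstration with stand-in functions: the file does not exist, so we download
result = get_photometry_cached('no_such_file.phot',
                               download=lambda: 'fresh',
                               read=lambda f: 'cached')
print(result)  # fresh
```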
 138  The content of the .phot file is most easily read using the L{cc.ivs.io.ascii.read2recarray} 
 139  function. Be careful, as it contains both absolute fluxes and flux ratios. 
 140   
 141  >>> data = ascii.read2recarray('HD180642.phot') 
 142   
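Since absolute fluxes and flux ratios are mixed in one file, the boolean C{color} column can be used to separate them. A minimal sketch with a two-row stand-in for the record array (the real file has many more columns, see the example content above):

```python
import numpy as np

# two-row stand-in for the record array returned by read2recarray
data = np.array([(4.83862e-17, 'erg/s/cm2/AA', False),
                 (1.40749,     'flux_ratio',   True)],
                dtype=[('cmeas', 'f8'), ('cunit', 'U20'), ('color', '?')])

absolute = data[~data['color']]   # absolute fluxes
ratios   = data[data['color']]    # flux ratios (colors)
print(len(absolute), len(ratios))  # 1 1
```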
 143  Notice that a C{comment} column is also added to the C{.phot} files. There you 
 144  can find translations of some of the flags (i.e. upper limit, extended source, 
 145  etc.), or sometimes just additional remarks on variability. Not all catalogs 
 146  have this feature implemented, so you remain responsible for checking 
 147  the quality of the photometry. 
 148   
 149  The references to each source are given in the C{bibtex} column. Simply call 
 150   
 151  >>> mysed.save_bibtex() 
 152   
 153  to convert those bibcodes to a C{.bib} file. 
 154   
 155  Using L{SED.plot_MW_side} and L{SED.plot_MW_top}, you can make a picture of where 
 156  your star is located with respect to the Milky Way and the Sun. With L{SED.plot_finderchart}, 
 157  you can check the location of your photometry, and also see if proper motions 
 158  etc are available. 
 159   
 160  Section 2. Where/what is my target? 
 161  =================================== 
 162   
 163  To give you some visual information on the target, the following plotting 
 164  procedure might be of some help. 
 165   
 166  To check whether the downloaded photometry really belongs to the target, 
 167  instead of some neighbouring star (don't forget to set C{radius} when looking 
 168  for photometry!), you can generate a finderchart with the location of the 
 169  downloaded photometry overplotted. On top of that, proper motion data is added 
 170  when available, as well as radial velocity data. When a distance is available, 
 171  the proper motion velocity will be converted to a true tangential velocity. 
 172   
 173  >>> p = pl.figure();mysed.plot_finderchart(window_size=1) 
 174   
 175  ]]include figure]]ivs_sed_builder_finderchart.png] 
 176   
 177  To know the location of your target with respect to the Milky Way (assuming your 
 178  target is in the Milky Way), you can call 
 179   
 180  >>> p = pl.figure();mysed.plot_MW_side() 
 181  >>> p = pl.figure();mysed.plot_MW_top() 
 182   
 183  ]]include figure]]ivs_sed_builder_MWside.png] 
 184   
 185  ]]include figure]]ivs_sed_builder_MWtop.png] 
 186   
 187  Section 3. SED fitting using a grid based approach 
 188  ================================================== 
 189   
 190  Subsection 3.1 Single star 
 191  -------------------------- 
 192   
 193  We make an SED of HD180642 by simply B{exploring a whole grid of Kurucz models} 
 194  (constructed via L{fit.generate_grid}, iterated over via L{fit.igrid_search} and 
 195  evaluated with L{fit.stat_chi2}). The model with the best parameters is picked 
 196  out and we make a full SED with those parameters. 
 197   
 198  >>> mysed = SED('HD180642') 
 199  >>> mysed.get_photometry() 
 200   
 201  Now we have collected B{all fluxes and colors}, but we do not want to use them 
 202  all: first, B{fluxes and colors are not independent}, so you probably want to use 
 203  either only absolute fluxes or only colors (plus perhaps one absolute flux per 
 204  system to compute the angular diameter) (see L{SED.set_photometry_scheme}). Second, 
 205  B{some photometry is better suited for fitting an SED than others}; typically IR 
 206  photometry does not add much to the fitting of massive stars, or it can be 
 207  contaminated with circumstellar material. Third, B{some photometry is not so 
 208  reliable}, i.e. the measurements can have large systematic uncertainties due to 
 209  e.g. calibration effects, which are typically not included in the error bars. 
 210   
 211  Currently, four standard schemes are implemented, which you can set via L{SED.set_photometry_scheme}: 
 212   
 213      1. C{absolute}: use only absolute values 
 214      2. C{colors}: use only colors (no angular diameter values calculated) 
 215      3. C{combo}: use all colors and one absolute value per photometric system 
 216      4. C{irfm}: (infrared flux method) use colors for wavelengths shorter than 
 217      the infrared, and absolute values for systems in the infrared. The 
 218      infrared is defined as wavelengths longer than 1 micron, but this can be 
 219      customized with the keyword C{infrared=(value,unit)} in 
 220      L{SED.set_photometry_scheme}. 
 221   
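The C{irfm} rule above can be illustrated with a few lines of plain Python; the band names and effective wavelengths here are only illustrative, not taken from the actual grids:

```python
# the 'irfm' selection rule: colors below the infrared cut-off,
# absolute fluxes above it (default cut-off = 1 micron = 1e4 Angstrom)
cutoff_AA = 1e4
photometry = [('JOHNSON.B-V', 4500.,  True),    # (photband, eff. wavelength AA, is_color)
              ('JOHNSON.V',   5505.,  False),
              ('2MASS.J',     12412., False),
              ('2MASS.J-H',   14000., True)]

selected = [band for band, wave, is_color in photometry
            if (is_color and wave < cutoff_AA) or (not is_color and wave >= cutoff_AA)]
print(selected)  # ['JOHNSON.B-V', '2MASS.J']
```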
 222  Here, we choose to use colors and one absolute flux per system, but exclude IR 
 223  photometry (wavelength range above 2.5 micron) and some systems and colors that 
 224  we know are not so trustworthy: 
 225   
 226  >>> mysed.set_photometry_scheme('combo') 
 227  >>> mysed.exclude(names=['STROMGREN.HBN-HBW','USNOB1','SDSS','DENIS','COUSINS','ANS','TD1'],wrange=(2.5e4,1e10)) 
 228   
 229  You can L{include}/L{exclude} photometry based on name, wavelength range, source and index, 
 230  and only select absolute photometry or colors (L{include_abs},L{include_colors}). 
 231  When working in interactive mode, in particular the index is useful. Print the 
 232  current set of photometry to the screen with 
 233   
 234  >>> print(photometry2str(mysed.master,color=True,index=True)) 
 235   
 236  and you will see the included photometry in green and the excluded photometry in red. 
 237  Each measurement is preceded by an index; you can use these indices 
 238  to select/deselect the photometry. 
 239   
 240  Speed up the fitting process by copying the model grids to the scratch disk 
 241   
 242  >>> model.copy2scratch(z='*') 
 243   
 244  Start the grid based fitting process and show some plots. We use 100000 randomly 
 245  distributed points over the grid: 
 246   
 247  >>> mysed.igrid_search(points=100000) 
 248   
 249  Delete the model grids from the scratch disk 
 250   
 251  >>> model.clean_scratch() 
 252   
 253  and make the plot 
 254   
 255  >>> p = pl.figure() 
 256  >>> p = pl.subplot(131);mysed.plot_sed() 
 257  >>> p = pl.subplot(132);mysed.plot_grid(limit=None) 
 258  >>> p = pl.subplot(133);mysed.plot_grid(x='ebv',y='z',limit=None) 
 259   
 260  ]]include figure]]ivs_sed_builder_example_fitting01.png] 
 261   
 262  The grid is a bit too coarse for our liking around the minimum, so we zoom in on 
 263  the results: 
 264   
 265  >>> teffrange = mysed.results['igrid_search']['CI']['teffL'],mysed.results['igrid_search']['CI']['teffU'] 
 266  >>> loggrange = mysed.results['igrid_search']['CI']['loggL'],mysed.results['igrid_search']['CI']['loggU'] 
 267  >>> ebvrange = mysed.results['igrid_search']['CI']['ebvL'],mysed.results['igrid_search']['CI']['ebvU'] 
 268  >>> mysed.igrid_search(points=100000,teffrange=teffrange,loggrange=loggrange,ebvrange=ebvrange) 
 269   
 270  and repeat the plot 
 271   
 272  >>> p = pl.figure() 
 273  >>> p = pl.subplot(131);mysed.plot_sed(plot_deredded=True) 
 274  >>> p = pl.subplot(132);mysed.plot_grid(limit=None) 
 275  >>> p = pl.subplot(133);mysed.plot_grid(x='ebv',y='z',limit=None) 
 276   
 277  ]]include figure]]ivs_sed_builder_example_fitting02.png] 
 278   
 279  You can automatically make plots (most plotting functions take C{colors=True/False} 
 280  as an argument, so you can make e.g. the 'color' SED and the 'absolute value' SED) of: 
 281   
 282      1. the grid (see L{SED.plot_grid}) 
 283      2. the SED (see L{SED.plot_sed}) 
 284      3. the fit statistics (see L{SED.plot_chi2}) 
 285   
 286  To change the grid, load the L{cc.ivs.sed.model} module and call 
 287  L{cc.ivs.sed.model.set_defaults} with appropriate arguments. See that module for 
 288  conventions on the grid structure when adding custom grids. 
 289   
 290  To add arrays manually, i.e. not from the predefined set of internet catalogs, 
 291  use the L{SED.add_photometry_fromarrays} function. 
 292   
 293  B{Warning}: Be careful when interpreting the chi2 results. In order to always have 
 294  a solution, the chi2 is rescaled so that the minimum equals 1, in case the 
 295  probability of the best chi2 model is zero. The chi2 rescaling factor I{f} mimics 
 296  a rescaling of all error bars with a factor I{sqrt(f)}, and does not discriminate 
 297  between systems (i.e., B{all} errors are blown up). If the error bars are 
 298  underestimated, the rescaling factor may also be wrong, which means 
 299  that the true probability region can be larger or smaller! 
 300   
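A toy illustration of this rescaling, assuming the simplest form of the procedure described above (the chi2 values are made up):

```python
import numpy as np

chi2 = np.array([77.1, 90.4, 118.9, 250.0])   # made-up chi2 values over a grid
f = chi2.min()                 # rescaling factor so that min(chi2/f) equals 1
chi2_rescaled = chi2 / f
# equivalent to inflating *all* error bars by a factor sqrt(f)
errorbar_inflation = np.sqrt(f)
print(chi2_rescaled.min())     # 1.0
```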
 301  Subsection 3.2 Binary star 
 302  -------------------------- 
 303   
 304  The SED class can create SEDs for multiple stars as well. There are two options 
 305  available: the multiple SED fit, which in theory can handle any number of stars, 
 306  and the binary SED fit, which is for binaries only and uses the masses of both 
 307  components to restrict the radii when combining two model SEDs. 
 308   
 309  As an example we take the system PG1104+243, which consists of a subdwarf B star 
 310  and a G2-type main-sequence star. The photometry from the standard catalogues that 
 311  are built into this class is of too low quality, so we use photometry obtained from 
 312  U{the subdwarf database<http://catserver.ing.iac.es/sddb/>}. 
 313   
 314  >>> mysed = SED('PG1104+243') 
 315   
 316  >>> meas, e_meas, units, photbands, source = ascii.read2array('pg1104+243_sddb.phot', dtype=str) 
 317  >>> meas = np.array(meas, dtype=float) 
 318  >>> e_meas = np.array(e_meas, dtype=float) 
 319  >>> mysed.add_photometry_fromarrays(meas, e_meas, units, photbands, source) 
 320   
 321  We use only the absolute fluxes 
 322   
 323  >>> mysed.set_photometry_scheme('abs') 
 324   
 325  For the main-sequence component we use Kurucz models with solar metallicity, and 
 326  for the sdB component TMAP models. We copy the model grids to the scratch disk 
 327  to speed up the process: 
 328   
 329  >>> grid1 = dict(grid='kurucz',z=+0.0) 
 330  >>> grid2 = dict(grid='tmap') 
 331  >>> model.set_defaults_multiple(grid1,grid2) 
 332  >>> model.copy2scratch() 
 333   
 334  The actual fitting. The second fit starts from the 95% probability intervals of 
 335  the first fit. 
 336   
 337  >>> teff_ms = (5000,7000) 
 338  >>> teff_sdb = (25000,45000) 
 339  >>> logg_ms = (4.00,4.50) 
 340  >>> logg_sdb = (5.00,6.50) 
 341  >>> mysed.igrid_search(masses=(0.47,0.71), teffrange=(teff_ms,teff_sdb), loggrange=(logg_ms,logg_sdb), ebvrange=(0.00,0.02), zrange=(0,0), points=2000000, type='binary') 
 342  >>> mysed.igrid_search(masses=(0.47,0.71), points=2000000, type='binary') 
 343   
 344  Delete the used models from the scratch disk 
 345   
 346  >>> model.clean_scratch() 
 347   
 348  Plot the results 
 349   
 350  >>> p = pl.figure() 
 351  >>> p = pl.subplot(131); mysed.plot_sed(plot_deredded=False) 
 352  >>> p = pl.subplot(132); mysed.plot_grid(x='teff', y='logg', limit=0.95) 
 353  >>> p = pl.subplot(133); mysed.plot_grid(x='teff-2', y='logg-2', limit=0.95) 
 354   
 355  ]]include figure]]ivs_sed_builder_example_fittingPG1104+243.png] 
 356   
 357  Subsection 3.3 Saving SED fits 
 358  ------------------------------ 
 359   
 360  You can save all the data to a multi-extension FITS file via 
 361   
 362  >>> mysed.save_fits() 
 363   
 364  This FITS file then contains all B{measurements} (it includes the .phot file), 
 365  the B{resulting SED}, the B{confidence intervals of all parameters} and also the 
 366  B{whole fitted grid}: in the above case, the extensions of the FITS file contain 
 367  the following information (we print part of each header):: 
 368   
 369      EXTNAME = 'DATA    ' 
 370      XTENSION= 'BINTABLE'           / binary table extension 
 371      BITPIX  =                    8 / array data type 
 372      NAXIS   =                    2 / number of array dimensions 
 373      NAXIS1  =                  270 / length of dimension 1 
 374      NAXIS2  =                   67 / length of dimension 2 
 375      TTYPE1  = 'meas    ' 
 376      TTYPE2  = 'e_meas  ' 
 377      TTYPE3  = 'flag    ' 
 378      TTYPE4  = 'unit    ' 
 379      TTYPE5  = 'photband' 
 380      TTYPE6  = 'source  ' 
 381      TTYPE7  = '_r      ' 
 382      TTYPE8  = '_RAJ2000' 
 383      TTYPE9  = '_DEJ2000' 
 384      TTYPE10 = 'cwave   ' 
 385      TTYPE11 = 'cmeas   ' 
 386      TTYPE12 = 'e_cmeas ' 
 387      TTYPE13 = 'cunit   ' 
 388      TTYPE14 = 'color   ' 
 389      TTYPE15 = 'include ' 
 390      TTYPE16 = 'synflux ' 
 391      TTYPE17 = 'mod_eff_wave' 
 392      TTYPE18 = 'chi2    ' 
 393   
 394      EXTNAME = 'MODEL   ' 
 395      XTENSION= 'BINTABLE'           / binary table extension 
 396      BITPIX  =                    8 / array data type 
 397      NAXIS   =                    2 / number of array dimensions 
 398      NAXIS1  =                   24 / length of dimension 1 
 399      NAXIS2  =                 1221 / length of dimension 2 
 400      TTYPE1  = 'wave    ' 
 401      TUNIT1  = 'A       ' 
 402      TTYPE2  = 'flux    ' 
 403      TUNIT2  = 'erg/s/cm2/AA' 
 404      TTYPE3  = 'dered_flux' 
 405      TUNIT3  = 'erg/s/cm2/AA' 
 406      TEFFL   =    23000.68377498454 
 407      TEFF    =    23644.49138963689 
 408      TEFFU   =    29999.33189058337 
 409      LOGGL   =    3.014445328877565 
 410      LOGG    =    4.984788506855546 
 411      LOGGU   =    4.996095055525759 
 412      EBVL    =   0.4703171900728142 
 413      EBV     =   0.4933185398871652 
 414      EBVU    =   0.5645144879211454 
 415      ZL      =   -2.499929434601112 
 416      Z       =   0.4332336811329144 
 417      ZU      =   0.4999776537652627 
 418      SCALEL  = 2.028305798068863E-20 
 419      SCALE   = 2.444606991671813E-20 
 420      SCALEU  = 2.830281842143698E-20 
 421      LABSL   =    250.9613352757437 
 422      LABS    =     281.771013453664 
 423      LABSU   =    745.3149766772975 
 424      CHISQL  =    77.07958742733673 
 425      CHISQ   =    77.07958742733673 
 426      CHISQU  =    118.8587169011471 
 427      CI_RAWL =   0.9999999513255379 
 428      CI_RAW  =   0.9999999513255379 
 429      CI_RAWU =   0.9999999999999972 
 430      CI_RED  =   0.5401112973063139 
 431      CI_REDL =   0.5401112973063139 
 432      CI_REDU =   0.9500015229597392 
 433   
 434      EXTNAME = 'IGRID_SEARCH' 
 435      XTENSION= 'BINTABLE'           / binary table extension 
 436      BITPIX  =                    8 / array data type 
 437      NAXIS   =                    2 / number of array dimensions 
 438      NAXIS1  =                   80 / length of dimension 1 
 439      NAXIS2  =                99996 / length of dimension 2 
 440      TTYPE1  = 'teff    ' 
 441      TTYPE2  = 'logg    ' 
 442      TTYPE3  = 'ebv     ' 
 443      TTYPE4  = 'z       ' 
 444      TTYPE5  = 'chisq   ' 
 445      TTYPE6  = 'scale   ' 
 446      TTYPE7  = 'escale ' 
 447      TTYPE8  = 'Labs    ' 
 448      TTYPE9  = 'CI_raw  ' 
 449      TTYPE10 = 'CI_red  ' 
 450   
 451   
 452  Subsection 3.4 Loading SED fits 
 453  ------------------------------- 
 454   
 455  Once saved, you can load the contents of the FITS file again into an SED object 
 456  via 
 457   
 458  >>> mysed = SED('HD180642') 
 459  >>> mysed.load_fits() 
 460   
 461  and then you can build all the plots again easily. You can of course use the 
 462  predefined plotting scripts to start a plot, and then later on change the 
 463  properties of the labels, legends etc... for higher quality plots or to better 
 464  suit your needs. 
 465   
 466  Section 4. Accessing the best fitting full SED model 
 467  ==================================================== 
 468   
 469  You can access the full SED model that matches the parameters found by the 
 470  fitting routine via: 
 471   
 472  >>> wavelength,flux,deredded_flux = mysed.get_best_model() 
 473   
 474  Note that this model is retrieved after fitting, and was not in any way used 
 475  during the fitting. As a consequence, there could be small differences between 
 476  synthetic photometry calculated from this returned model and the synthetic 
 477  fluxes stored in C{mysed.results['igrid_search']['synflux']}, which is the 
 478  synthetic photometry coming from the interpolation of the grid of pre-interpolated 
 479  photometry. See the documentation of L{SED.get_model} for more information. 
 480   
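To quantify these differences, one can compare the two sets of synthetic fluxes directly. A sketch with made-up numbers; in practice the first array would come from synthetic photometry on the model returned by C{mysed.get_best_model()} and the second from C{mysed.results['igrid_search']['synflux']}:

```python
import numpy as np

# made-up synthetic fluxes: from the returned best model vs. the grid interpolation
synflux_model = np.array([2.54e-12, 1.81e-12, 2.28e-13])
synflux_grid  = np.array([2.52e-12, 1.82e-12, 2.30e-13])

# largest relative difference between the two sets of synthetic photometry
rel_diff = np.abs(synflux_model - synflux_grid) / synflux_grid
print(rel_diff.max())
```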
 481  Section 5. Radii, distances and luminosities 
 482  ============================================ 
 483   
 484  Subsection 5.1. Relations between quantities 
 485  -------------------------------------------- 
 486   
 487  Most SED grids don't have the radius as a tunable model parameter. The scale 
 488  factor, which is available for all fitted models in the grid when at least one 
 489  absolute photometric point is included, is directly related to the angular 
 490  diameter (its square root is proportional to it). The following relations hold:: 
 491   
 492      >> distance = radius / np.sqrt(scale) 
 493      >> radius = distance * np.sqrt(scale) 
 494   
 495  Here, C{radius} and C{distance} must be in the same units. The true absolute luminosity 
 496  (solar units) is related to the absolute luminosity from the SED (solar units) 
 497  via the radius (solar units):: 
 498   
 499      >> L_abs_true = L_abs * radius**2 
 500   
 501  Finally, the angular diameter can be computed via:: 
 502   
 503      >> 2*conversions.convert('sr','mas',scale) 
 504   
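A numeric sketch of these relations in plain numpy; the scale and distance values are made up, and C{Rsol_pc} is the approximate solar radius expressed in parsec (used here instead of the C{conversions} module):

```python
import numpy as np

Rsol_pc = 2.2546e-8        # approx. solar radius in parsec
scale = 2.44e-20           # made-up best-fit scale factor from an SED fit
distance = 500.0           # made-up distance in pc

radius_pc = distance * np.sqrt(scale)    # radius in pc (radius = distance*sqrt(scale))
radius = radius_pc / Rsol_pc             # radius in solar radii
# the angular diameter 2*sqrt(scale) is in radians; convert to milliarcseconds
mas_per_rad = 180. / np.pi * 3600. * 1000.
ang_diam = 2. * np.sqrt(scale) * mas_per_rad
print(round(radius, 2), round(ang_diam, 4))
```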
 505  Subsection 5.2. Seismic constraints 
 506  ----------------------------------- 
 507   
 508  If the star shows clear solar-like oscillations, you can use nu_max to put an 
 509  independent constraint on the surface gravity, given the effective temperature of 
 510  the model (you are free to give errors or not; just keep in mind that in the 
 511  former case, you will get an array of Uncertainties rather than floats):: 
 512   
 513      >> teff = 4260.,'K' 
 514      >> nu_max = 38.90,0.86,'muHz' 
 515      >> logg_slo = conversions.derive_logg_slo(teff,nu_max,unit='[cm/s2]') 
 516   
 517  If you have the large separation (l=0 modes) and the nu_max, you can get an 
 518  estimate of the radius:: 
 519   
 520      >> Deltanu0 = 4.80,0.02,'muHz' 
 521      >> R_slo = conversions.derive_radius_slo(nu_max,Deltanu0,teff,unit='Rsol') 
 522   
 523  Then from the radius, you can get both the absolute luminosity and distance to 
 524  your star. 
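The C{derive_logg_slo} and C{derive_radius_slo} calls belong to the ivs conversions module; as a rough standalone cross-check, the standard solar-like scaling relations they are commonly based on can be written out directly. The solar reference values below are assumptions of this sketch, and the example values are those from the text:

```python
import numpy as np

# Solar reference values (assumptions of this sketch)
NU_MAX_SUN = 3050.    # muHz
DNU_SUN = 134.9       # muHz
TEFF_SUN = 5777.      # K
LOGG_SUN = 4.438      # log10(cm/s2)

def logg_from_numax(nu_max, teff):
    """g/g_sun = (nu_max/nu_max_sun) * sqrt(Teff/Teff_sun)"""
    return LOGG_SUN + np.log10((nu_max/NU_MAX_SUN)*np.sqrt(teff/TEFF_SUN))

def radius_from_numax_dnu(nu_max, dnu, teff):
    """R/R_sun = (nu_max/nu_max_sun) * (dnu/dnu_sun)**-2 * sqrt(Teff/Teff_sun)"""
    return (nu_max/NU_MAX_SUN)*(dnu/DNU_SUN)**-2*np.sqrt(teff/TEFF_SUN)

# the example values from the text: a cool star with nu_max = 38.90 muHz
logg = logg_from_numax(38.90, 4260.)                    # ~2.5, a giant
radius = radius_from_numax_dnu(38.90, 4.80, 4260.)      # ~8.7 Rsol
```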
 525   
 526  Subsection 5.3. Parallaxes 
 527  -------------------------- 
 528   
 529  From the parallax, you can get an estimate of the distance. This is, however, 
 530  dependent on some prior assumptions such as the shape of the galaxy and the 
 531  distribution of stars. To estimate the probability density of a star being at 
 532  a particular distance, given its measured parallax, you can call L{distance.distprob}:: 
 533   
 534      >> gal_latitude = 0.5 
 535      >> plx = 3.14,0.5 
 536      >> d_prob = distance.distprob(d,gal_latitude,plx) 
 537   
 538  Using this distance, you can get an estimate for the radius of your star, and 
 539  thus also the absolute luminosity. 
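Ignoring the galactic prior that L{distance.distprob} accounts for, a naive distance estimate is simply the inverted parallax. A quick sketch with the example values from the text:

```python
# Naive parallax inversion: d [pc] = 1000 / plx [mas].
# This ignores the prior on the stellar distribution that
# distance.distprob takes into account.
plx, e_plx = 3.14, 0.5        # parallax and error, in mas

d = 1000./plx                 # distance in pc
e_d = 1000.*e_plx/plx**2      # first-order error propagation
```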
 540   
 541  Subsection 5.4. Reddening constraints 
 542  ------------------------------------- 
 543   
 544  An estimate of the reddening of a star at a particular distance may be obtained 
 545  with the L{extinctionmodels.findext} function. There are currently three 
 546  reddening maps available: Drimmel, Marshall and Arenou:: 
 547   
 548      >> lng,lat = 120.,-0.5 
 549      >> dist = 100. # pc 
 550      >> Rv = 3.1 
 551      >> EBV = extinctionmodels.findext(lng,lat,model='drimmel',distance=dist)/Rv 
 552      >> EBV = extinctionmodels.findext(lng,lat,model='marshall',distance=dist)/Rv 
 553      >> EBV = extinctionmodels.findext(lng,lat,model='arenou',distance=dist)/Rv 
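As the division by C{Rv} above suggests, C{findext} returns a total extinction at the given distance, and the colour excess follows from E(B-V) = A_V/Rv. With those numbers a magnitude can be dereddened by hand; the values below are purely illustrative, not output of the reddening maps:

```python
# Illustrative numbers only: an assumed extinction value standing in
# for the output of extinctionmodels.findext.
Rv = 3.1
Av = 0.62            # assumed total V-band extinction, in mag
EBV = Av/Rv          # colour excess E(B-V)

V_obs = 9.50         # assumed observed V magnitude
V_0 = V_obs - Av     # dereddened V magnitude
```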
 554   
 555   
 556  """ 
 557  import re 
 558  import sys 
 559  import time 
 560  import os 
 561  import logging 
 562  import itertools 
 563  import glob 
 564  import json 
 565   
 566  import pylab as pl 
 567  from matplotlib import mlab 
 568  try: 
 569      import image 
 570  except ImportError: 
 571      import Image as image 
 572  import numpy as np 
 573  import scipy.stats 
 574  from scipy.interpolate import Rbf 
 575  from astropy.io import fits as pyfits 
 576   
 577  import cc.path 
 578  #from ivs import config 
 579  from cc.ivs.aux import numpy_ext 
 580  from cc.ivs.aux import termtools 
 581  from cc.ivs.aux.decorators import memoized,clear_memoization 
 582  from cc.ivs.io import ascii 
 583  from cc.ivs.io import fits 
 584  from cc.ivs.io import hdf5 
 585  from cc.ivs.sed import model 
 586  from cc.ivs.sed import filters 
 587  from cc.ivs.sed import fit 
 588  from cc.ivs.sed import distance 
 589  from cc.ivs.sed import extinctionmodels 
 590  from cc.ivs.sed.decorators import standalone_figure 
 591  from cc.ivs.tools import spectra_combine 
 592  from cc.ivs.catalogs import crossmatch 
 593  from cc.ivs.catalogs import vizier 
 594  from cc.ivs.catalogs import mast 
 595  from cc.ivs.catalogs import sesame 
 596  from cc.ivs.catalogs import corot 
 597  from cc.ivs.units import conversions 
 598  from cc.ivs.units import constants 
 599  from cc.ivs.units.uncertainties import unumpy,ufloat 
 600  from cc.ivs.units.uncertainties.unumpy import sqrt as usqrt 
 601  from cc.ivs.units.uncertainties.unumpy import tan as utan 
 602  from cc.ivs.sigproc import evaluate 
 603  # try: 
 604  #     from ivs.stellar_evolution import evolutionmodels 
 605  # except ImportError: 
 606  #     print("Warning: no evolution models available (probably not important)") 
 607   
 608  logger = logging.getLogger("SED.BUILD") 
 609  #logger.setLevel(10) 
 610   
 611  def fix_master(master,e_default=None): 
 612      """ 
 613      Clean/extend/fix record array received from C{get_photometry}. 
 614   
 615      This function does a couple of things: 
 616   
 617          1. Adds common but uncatalogized colors like 2MASS.J-H if not already 
 618             present. WARNING: these colors can mix values from the same system 
 619             but from different catalogs! 
 620          2. Removes photometry for which no calibration is available. 
 621          3. Adds a column 'color' with flag False denoting absolute flux measurement 
 622             and True denoting color. 
 623          4. Adds a column 'include' with flag True meaning the value will be 
 624             included in the fit. 
 625          5. Sets some default errors to photometric values, for which we know 
 626             that the catalog values are not trustworthy. 
 627          6. Sets a lower limit to the allowed errors of 1%. Errors below this 
 628             value are untrustworthy because the calibration error is larger than that. 
 629          7. USNOB1 and ANS photometry are set to have a minimum error of 30% 
 630             due to uncertainties in response curves. 
 631   
 632      @param master: record array containing photometry. This should have the fields 
 633          'meas','e_meas','unit','photband','source','_r','_RAJ2000','DEJ2000', 
 634          'cmeas','e_cmeas','cwave','cunit' 
 635      @type master: record array 
 636      @param e_default: default error for measurements without errors 
 637      @type e_default: float 
 638      @return: record array extended with fields 'include' and 'color', and with 
 639          rows added (see above description) 
 640      @rtype: numpy record array 
 641      """ 
 642      #-- we recognize uncalibrated stuff as those for which no absolute flux was 
 643      #   obtained, and remove them: 
 644      master = master[-np.isnan(master['cmeas'])] 
 645      cats = np.array([ph.split('.')[0] for ph in master['source']]) 
 646      set_cats = sorted(set(cats)) 
 647      #-- add common uncatalogized colors: 
 648      columns = list(master.dtype.names) 
 649      #-- if the separate bands are available (and not the color itself), 
 650      #   calculate the colors here. We need to do this for every separate 
 651      #   system! 
 652      add_rows = [] 
 653      for cat in set_cats: 
 654          master_ = master[cats==cat] 
 655   
 656          for color in ['2MASS.J-H','2MASS.KS-H','TD1.1565-1965','TD1.2365-1965','TD1.2365-2740', 
 657                        'JOHNSON.J-H','JOHNSON.K-H','JOHNSON.B-V','JOHNSON.U-B','JOHNSON.R-V','JOHNSON.I-V', 
 658                        'GALEX.FUV-NUV','TYCHO2.BT-VT','WISE.W1-W2','WISE.W3-W2','WISE.W4-W3', 
 659                        'SDSS.U-G','SDSS.G-R','SDSS.R-I','SDSS.R-Z', 
 660                        'UVEX.U-G','UVEX.G-R','UVEX.R-I','UVEX.HA-I', 
 661                        'DENIS.I-J','DENIS.KS-J', 
 662                        'ANS.15N-15W','ANS.15W-18','ANS.18-22','ANS.22-25','ANS.25-33']: 
 663              #-- get the filter system and the separate bands for these colors 
 664              system,band = color.split('.') 
 665              band0,band1 = band.split('-') 
 666              band0,band1 = '%s.%s'%(system,band0),'%s.%s'%(system,band1) 
 667   
 668              if band0 in master_['photband'] and band1 in master_['photband'] and not color in master_['photband']: 
 669                  #-- where are the bands located? 
 670                  index0 = list(master_['photband']).index(band0) 
 671                  index1 = list(master_['photband']).index(band1) 
 672   
 673                  #-- start a new row to add the color 
 674                  row = list(master_[index1]) 
 675                  row[columns.index('photband')] = color 
 676                  row[columns.index('cwave')] = np.nan 
 677   
 678                  #-- it could be a magnitude difference 
 679                  if master_['unit'][index0]=='mag': 
 680                      row[columns.index('meas')] = master_['meas'][index0]-master_['meas'][index1] 
 681                      row[columns.index('e_meas')] = np.sqrt(master_['e_meas'][index0]**2+master_['e_meas'][index1]**2) 
 682                      #-- error will not always be available... 
 683                      try: 
 684                          row[columns.index('cmeas')],row[columns.index('e_cmeas')] = conversions.convert('mag_color','flux_ratio',row[columns.index('meas')],row[columns.index('e_meas')],photband=color) 
 685                      except AssertionError: 
 686                          row[columns.index('cmeas')] = conversions.convert('mag_color','flux_ratio',row[columns.index('meas')],photband=color) 
 687                          row[columns.index('e_cmeas')] = np.nan 
 688                  #-- or it could be a flux ratio 
 689                  else: 
 690                      row[columns.index('meas')] = master_['meas'][index0]/master_['meas'][index1] 
 691                      row[columns.index('e_meas')] = np.sqrt(((master_['e_meas'][index0]/master_['meas'][index0])**2+(master_['e_meas'][index1]/master_['meas'][index1])**2)*row[columns.index('meas')]**2) 
 692                      row[columns.index('cmeas')],row[columns.index('e_cmeas')] = row[columns.index('meas')],row[columns.index('e_meas')] 
 693                  row[columns.index('cunit')] = 'flux_ratio' 
 694                  add_rows.append(tuple(row)) 
 695      master = numpy_ext.recarr_addrows(master,add_rows) 
 696   
 697      #-- add an extra column with a flag to distinguish colors from absolute 
 698      #   fluxes, and a column with flags to include/exclude photometry. 
 699      #   By default, exclude photometry if the effective wavelength is above 
 700      #   150 mum or if the measurement is nan 
 701      if not 'color' in master.dtype.names: 
 702          extra_cols = [[filters.is_color(photband) for photband in master['photband']], 
 703                        [(cwave<150e4) or np.isnan(meas) for cwave,meas in zip(master['cwave'],master['cmeas'])]] 
 704          dtypes = [('color',np.bool),('include',np.bool)] 
 705          master = numpy_ext.recarr_addcols(master,extra_cols,dtypes) 
 706      else: 
 707          iscolor = [filters.is_color(photband) for photband in master['photband']] 
 708          master['color'] = iscolor 
 709      #-- set default errors if no errors are available and set really really 
 710      #   small errors to some larger default value 
 711      if e_default is not None: 
 712          no_error = np.isnan(master['e_cmeas']) 
 713          master['e_cmeas'][no_error] = np.abs(e_default*master['cmeas'][no_error]) 
 714          small_error = master['e_cmeas']<(0.01*np.abs(master['cmeas'])) 
 715          master['e_cmeas'][small_error] = 0.01*np.abs(master['cmeas'][small_error]) 
 716   
 717      #-- the measurements from USNOB1 are not that good because the response 
 718      #   curve is approximated by the JOHNSON filter. Set the errors to min 30%. 
 719      #   The same holds for ANS: here, the transmission curves are very uncertain 
 720      for i,photband in enumerate(master['photband']): 
 721          if 'USNOB1' in photband or 'ANS' in photband: 
 722              master['e_cmeas'][i] = max(0.30*master['cmeas'][i],master['e_cmeas'][i]) 
 723   
 724      #-- remove negative fluxes 
 725      master = master[master['cmeas']>0] 
 726   
 727      return master 
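The error policy of C{fix_master} (steps 5-7 in its docstring) can be demonstrated standalone on a plain record array; the photbands and flux values below are invented for illustration:

```python
import numpy as np

# Three invented rows: a well-measured flux, a USNOB1 flux, and a
# flux without an error.
master = np.rec.fromrecords(
    [('JOHNSON.V', 1e-12, 1e-15),
     ('USNOB1.B1', 2e-13, 1e-15),
     ('2MASS.J',   5e-13, np.nan)],
    names=['photband', 'cmeas', 'e_cmeas'])

# step 5: fill in missing errors with a default fraction of the flux
e_default = 0.1
no_error = np.isnan(master['e_cmeas'])
master['e_cmeas'][no_error] = np.abs(e_default*master['cmeas'][no_error])

# step 6: floor all errors at 1% of the flux
small_error = master['e_cmeas'] < 0.01*np.abs(master['cmeas'])
master['e_cmeas'][small_error] = 0.01*np.abs(master['cmeas'][small_error])

# step 7: force USNOB1/ANS errors to at least 30%
for i, photband in enumerate(master['photband']):
    if 'USNOB1' in photband or 'ANS' in photband:
        master['e_cmeas'][i] = max(0.30*master['cmeas'][i], master['e_cmeas'][i])
```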
728
 729   
 730  def decide_phot(master,names=None,wrange=None,sources=None,indices=None,ptype='all',include=False): 
 731      """ 
 732      Exclude/include photometric passbands containing one of the strings listed in 
 733      C{names}. 
 734   
 735      This function will set the flags in the 'include' field of the extended 
 736      master record array. 
 737   
 738      Colours also have a wavelength in this function, and it is defined as the 
 739      average wavelength of the included bands (doesn't work for Stromgren C1 and M1). 
 740   
 741      include=False excludes photometry based on the input parameters. 
 742      include=True includes photometry based on the input parameters. 
 743   
 744      Some examples: 
 745   
 746          1. Exclude all measurements:: 
 747   
 748              >> decide_phot(master,wrange=(-np.inf,+np.inf),ptype='all',include=False) 
 749   
 750          2. Include all TD1 fluxes and colors:: 
 751   
 752              >> decide_phot(master,names=['TD1'],ptype='all',include=True) 
 753   
 754          3. Include all V band measurements from all systems (but not the colors):: 
 755   
 756              >> decide_phot(master,names=['.V'],ptype='abs',include=True) 
 757   
 758          4. Include all Geneva colors and exclude Geneva magnitudes:: 
 759   
 760              >> decide_phot(master,names=['GENEVA'],ptype='col',include=True) 
 761              >> decide_phot(master,names=['GENEVA'],ptype='abs',include=False) 
 762   
 763          5. Exclude all infrared measurements beyond 1 micron:: 
 764   
 765              >> decide_phot(master,wrange=(1e4,np.inf),ptype='all',include=False) 
 766   
 767          6. Include all AKARI measurements below 10 micron:: 
 768   
 769              >> decide_phot(master,names=['AKARI'],wrange=(-np.inf,1e5),ptype='all',include=True) 
 770   
 771      @param master: record array containing all photometry 
 772      @type master: numpy record array 
 773      @param names: string excerpts to match filters 
 774      @type names: list of strings 
 775      @param wrange: wavelength range (most likely angstrom) to include/exclude 
 776      @type wrange: 2-tuple (start wavelength,end wavelength) 
 777      @param sources: list of sources 
 778      @type sources: list of strings 
 779      @param indices: list of indices (integers) 
 780      @type indices: list of integers 
 781      @param ptype: type of photometry to include/exclude: absolute values, colors 
 782          or both 
 783      @type ptype: string, one of 'abs','col','all' 
 784      @param include: flag setting exclusion or inclusion 
 785      @type include: boolean 
 786      @return: master record array with adapted 'include' flags 
 787      @rtype: numpy record array 
 788      """ 
 789      #-- exclude/include passbands based on their names 
 790      if names is not None: 
 791          logger.info('%s photometry based on photband containing one of %s'%((include and 'Include' or "Exclude"),names)) 
 792          for index,photband in enumerate(master['photband']): 
 793              for name in names: 
 794                  if name in photband: 
 795                      if ptype=='all' or (ptype=='abs' and -master['color'][index]) or (ptype=='col' and master['color'][index]): 
 796                          master['include'][index] = include 
 797                      break 
 798      #-- exclude/include colors based on their wavelength 
 799      if wrange is not None: 
 800          logger.info('%s photometry based on photband wavelength between %s'%((include and 'Include' or "Exclude"),wrange)) 
 801          for index,photband in enumerate(master['photband']): 
 802              system,color = photband.split('.') 
 803              if not '-' in color or ptype=='abs': 
 804                  continue 
 805              band0,band1 = color.split('-') 
 806              cwave0 = filters.eff_wave('%s.%s'%(system,band0)) 
 807              cwave1 = filters.eff_wave('%s.%s'%(system,band1)) 
 808              if (wrange[0]<cwave0<wrange[1]) or (wrange[0]<cwave1<wrange[1]): 
 809                  master['include'][index] = include 
 810          #-- exclude/include passbands based on their wavelength 
 811          if not ptype=='col': 
 812              master['include'][(wrange[0]<master['cwave']) & (master['cwave']<wrange[1])] = include 
 813      #-- exclude/include passbands based on their source 
 814      if sources is not None: 
 815          logger.info('%s photometry based on source catalog in %s'%((include and 'Include' or "Exclude"),sources)) 
 816          for index,msource in enumerate(master['source']): 
 817              for source in sources: 
 818                  if source==msource: 
 819                      if ptype=='all' or (ptype=='abs' and -master['color'][index]) or (ptype=='col' and master['color'][index]): 
 820                          master['include'][index] = include 
 821                      break 
 822      #-- exclude/include passbands based on their index 
 823      if indices is not None: 
 824          logger.info('%s photometry based on index'%((include and 'Include' or "Exclude"))) 
 825          indices = np.array(indices,int) 
 826          if not indices.shape: indices = indices.reshape(1) 
 827          master['include'][indices] = include 
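The name-matching branch of C{decide_phot} reduces to flipping an 'include' flag for every passband whose name contains one of the given substrings. A minimal standalone sketch on an invented three-row record array:

```python
import numpy as np

# Invented toy table: two Geneva entries and one 2MASS entry,
# all initially included.
master = np.rec.fromrecords(
    [('GENEVA.U', True), ('GENEVA.U-B', True), ('2MASS.J', True)],
    names=['photband', 'include'])

def decide(master, names, include):
    """Set the 'include' flag for every photband matching a substring."""
    for index, photband in enumerate(master['photband']):
        if any(name in photband for name in names):
            master['include'][index] = include

# exclude everything Geneva, keep the rest untouched
decide(master, names=['GENEVA'], include=False)
```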
828
829 830 -def photometry2str(master,comment='',sort='photband',color=False,index=False):
831 """ 832 String representation of master record array. 833 834 Sorting is disabled when C{index=True}. 835 836 @param master: master record array containing photometry 837 @type master: numpy record array 838 """ 839 if sort and not index: 840 master = master[np.argsort(master[sort])] 841 842 templateh = '{:20s} {:>12s} {:>12s} {:12s} {:>10s} {:>12s} {:>12s} {:12s} {:30s}' 843 templated = '{:20s} {:12g} {:12g} {:12s} {:10.0f} {:12g} {:12g} {:12s} {:30s}' 844 header = ['PHOTBAND','MEAS','E_MEAS','UNIT','CWAVE','CMEAS','E_CMEAS','CUNIT','SOURCE'] 845 if 'comments' in master.dtype.names: 846 templateh += ' {:s}' 847 templated += ' {:s}' 848 header += ['COMMENTS'] 849 if index: 850 templateh = '{:3s} '+templateh 851 templated = '{:3d} '+templated 852 header = ['NR']+header 853 854 txt = [comment+templateh.format(*header)] 855 txt.append(comment+'='*170) 856 columns = [master[col.lower()] for col in header if not col=='NR'] 857 for nr,contents in enumerate(zip(*columns)): 858 contents = list(contents) 859 if 'comments' in master.dtype.names: 860 contents[-1] = contents[-1].replace('_',' ') 861 if index: 862 contents = [nr] + contents 863 line = templated.format(*contents) 864 if color: 865 mycolor = termtools.green if master['include'][nr] else termtools.red 866 line = mycolor(line) 867 txt.append(comment + line) 868 return "\n".join(txt)
869
870 #@memoized 871 #def get_schaller_grid(): 872 #""" 873 #Download Schaller 1992 evolutionary tracks and return an Rbf interpolation 874 #function. 875 876 #@return: Rbf interpolation function 877 #@rtype: Rbf interpolation function 878 #""" 879 ##-- translation between table names and masses 880 ##masses = [1,1.25,1.5,1.7,2,2.5,3,4,5,7,9,12,15,20,25,40,60][:-1] 881 ##tables = ['table20','table18','table17','table16','table15','table14', 882 ## 'table13','table12','table11','table10','table9','table8', 883 ## 'table7','table6','table5','table4','table3'][:-1] 884 ##-- read in all the tables and compute radii and luminosities. 885 #data,comms,units = vizier.search('J/A+AS/96/269/models') 886 #all_teffs = 10**data['logTe'] 887 #all_radii = np.sqrt((10**data['logL']*constants.Lsol_cgs)/(10**data['logTe'])**4/(4*np.pi*constants.sigma_cgs)) 888 #all_loggs = np.log10(constants.GG_cgs*data['Mass']*constants.Msol_cgs/(all_radii**2)) 889 #all_radii /= constants.Rsol_cgs 890 ##-- remove low temperature models, the evolutionary tracks are hard to 891 ## interpolate there. 892 #keep = all_teffs>5000 893 #all_teffs = all_teffs[keep] 894 #all_radii = all_radii[keep] 895 #all_loggs = all_loggs[keep] 896 ##-- make linear interpolation model between all modelpoints 897 #mygrid = Rbf(np.log10(all_teffs),all_loggs,all_radii,function='linear') 898 #logger.info('Interpolation of Schaller 1992 evolutionary tracks to compute radii') 899 #return mygrid 900 901 902 903 904 905 906 907 #def get_radii(teffs,loggs): 908 #""" 909 #Retrieve radii from stellar evolutionary tracks from Schaller 1992. 
910 911 #@param teffs: model effective temperatures 912 #@type teffs: numpy array 913 #@param loggs: model surface gravities 914 #@type loggs: numpy array 915 #@return: model radii (solar units) 916 #@rtype: numpy array 917 #""" 918 #mygrid = get_schaller_grid() 919 #radii = mygrid(np.log10(teffs),loggs) 920 #return radii 921 922 923 #def calculate_distance(plx,gal,teffs,loggs,scales,n=75000): 924 #""" 925 #Calculate distances and radii of a target given its parallax and location 926 #in the galaxy. 927 928 929 #""" 930 ##-- compute distance up to 25 kpc, which is about the maximum distance from 931 ## earth to the farthest side of the Milky Way galaxy 932 ## rescale to set the maximum to 1 933 #d = np.logspace(np.log10(0.1),np.log10(25000),100000) 934 #if plx is not None: 935 #dprob = distance.distprob(d,gal[1],plx) 936 #dprob = dprob / dprob.max() 937 #else: 938 #dprob = np.ones_like(d) 939 ##-- compute the radii for the computed models, and convert to parsec 940 ##radii = np.ones(len(teffs[-n:])) 941 #radii = get_radii(teffs[-n:],loggs[-n:]) 942 #radii = conversions.convert('Rsol','pc',radii) 943 #d_models = radii/np.sqrt(scales[-n:]) 944 ##-- we set out of boundary values to zero 945 #if plx is not None: 946 #dprob_models = np.interp(d_models,d[-n:],dprob[-n:],left=0,right=0) 947 #else: 948 #dprob_models = np.ones_like(d_models) 949 ##-- reset the radii to solar units for return value 950 #radii = conversions.convert('pc','Rsol',radii) 951 #return (d_models,dprob_models,radii),(d,dprob) 952 953 954 955 -class SED(object):
956 """ 957 Class that facilitates the use of the cc.ivs.sed module. 958 959 This class is meant to be an easy interface to many of the cc.ivs.sed 960 module's functionality. 961 962 The most important attributes of SED are: 963 964 1. C{sed.ID}: star's identification (str) 965 2. C{sed.photfile}: name of the file containing all photometry (str) 966 3. C{sed.info}: star's information from Simbad (dict) 967 4. C{sed.master}: photometry data (record array) 968 5. C{sed.results}: results and summary of the fitting process (dict) 969 970 After fitting, e.g. via calling L{igrid_search}, you can call L{get_model} 971 to retrieve the full SED matching the best fitting parameters (or, rather, 972 closely matching them, see the documentation). 973 974 """
975 - def __init__(self,ID=None,photfile=None,plx=None,load_fits=True,load_hdf5=True,label=''):
976 """ 977 Initialize SED class. 978 979 Different ways to initialize: 980 981 B{1. If no previous data are saved:} 982 983 >>> mysed = SED('HD129929') 984 985 B{2. If previous data exists:} 986 987 >>> #mysed = SED(photfile='HD129929.phot') # will set ID with 'oname' field from SIMBAD 988 >>> #mysed = SED(ID='bla', photfile='HD129929.phot') # Sets custom ID 989 990 The C{ID} variable is used internally to look up data, so it should be 991 something SIMBAD understands and that designates the target. 992 993 @param plx: parallax (and error) of the object 994 @type plx: tuple (plx,e_plx) 995 """ 996 self.ID = ID 997 self.label = label 998 self.info = {} 999 #-- the file containing photometry should have the following name. We 1000 # save photometry to a file to avoid having to download photometry 1001 # each time from the internet 1002 if photfile is None: 1003 self.photfile = '%s.phot'%(ID).replace(' ','_') 1004 #-- keep information on the star from SESAME, but override parallax with 1005 # the value from Van Leeuwen's new reduction. Set the galactic 1006 # coordinates, which are not available from SESAME. 
1007 else: 1008 self.photfile = photfile 1009 1010 #-- load information from the photometry file if it exists 1011 if not os.path.isfile(self.photfile): 1012 try: 1013 self.info = sesame.search(os.path.basename(ID),fix=True) 1014 except KeyError: 1015 logger.warning('Star %s not recognised by SIMBAD'%(os.path.basename(ID))) 1016 try: 1017 self.info = sesame.search(os.path.basename(ID),db='N',fix=True) 1018 logger.info('Star %s recognised by NED'%(os.path.basename(ID))) 1019 except KeyError: 1020 logger.warning('Star %s not recognised by NED'%(os.path.basename(ID))) 1021 #-- final attempt: if it's a CoRoT star, try the catalog 1022 if 'corot' in ID.lower() and not 'galpos' in self.info: 1023 corot_id = int("".join([char for char in ID if char.isdigit()])) 1024 try: 1025 self.info['jradeg'],self.info['jdedeg'] = ra,dec = corot.resolve(corot_id) 1026 gal = conversions.convert('equatorial','galactic',(ra,dec),epoch='2000') 1027 self.info['galpos'] = float(gal[0])/np.pi*180,float(gal[1])/np.pi*180 1028 logger.info('Resolved position via CoRoT EXO catalog') 1029 except: 1030 logger.warning("Star %s not recognised by CoRoT"%(os.path.basename(ID))) 1031 1032 if plx is not None: 1033 if not 'plx' in self.info: 1034 self.info['plx'] = {} 1035 self.info['plx']['v'] = plx[0] 1036 self.info['plx']['e'] = plx[1] 1037 else: 1038 self.load_photometry() 1039 #-- if no ID was given, set the official name as the ID. 1040 if self.ID is None: 1041 self.ID = os.path.splitext(os.path.basename(self.photfile))[0]#self.info['oname'] 1042 #--load information from the FITS file if it exists 1043 self.results = {} 1044 self.constraints = {} 1045 if load_fits: 1046 self.load_fits() 1047 if load_hdf5: 1048 self.load_hdf5() 1049 1050 #-- prepare for information on fitting processes 1051 self.CI_limit = 0.95
1052
1053 - def __repr__(self):
1054 """ 1055 Machine readable string representation of an SED object. 1056 """ 1057 return "SED('{}')".format(self.ID)
1058
1059 - def __str__(self):
1060 """ 1061 Human readable string representation of an SED object. 1062 """ 1063 txt = [] 1064 #-- object designation 1065 txt.append("Object identification: {:s}".format(self.ID)) 1066 if hasattr(self,'info') and self.info and 'oname' in self.info: 1067 txt.append("Official designation: {:s}".format(self.info['oname'])) 1068 #-- additional info 1069 for key in sorted(self.info.keys()): 1070 if isinstance(self.info[key],dict): 1071 txt.append(" {:10s} = ".format(key)+", ".join(["{}: {}".format(i,j) for i,j in self.info[key].iteritems()])) 1072 else: 1073 txt.append(" {:10s} = {}".format(key,self.info[key])) 1074 1075 #txt.append("Included photometry:") 1076 #if hasattr(self,'master') and self.master is not None: 1077 #include_grid = self.master['include'] 1078 #txt.append(photometry2str(self.master[include_grid])) 1079 #txt.append("Excluded photometry:") 1080 #if hasattr(self,'master') and self.master is not None: 1081 #include_grid = self.master['include'] 1082 #txt.append(photometry2str(self.master[-include_grid])) 1083 if hasattr(self,'master'): 1084 txt.append(photometry2str(self.master,color=True)) 1085 return "\n".join(txt)
1086 1087 #{ Handling photometric data
1088 - def get_photometry(self,radius=None,ra=None,dec=None, 1089 include=None,exclude=None,force=False, 1090 units='erg/s/cm2/AA'):
1091 """ 1092 Search photometry on the net or from the phot file if it exists. 1093 1094 For bright stars, you can set radius a bit higher... 1095 1096 @param radius: search radius (arcseconds) 1097 @type radius: float. 1098 """ 1099 if radius is None: 1100 if 'mag.V.v' in self.info and self.info['mag.V.v']<6.: 1101 radius = 60. 1102 else: 1103 radius = 10. 1104 if not os.path.isfile(self.photfile) or force: 1105 #-- get and fix photometry. Set default errors to 1%, and set 1106 # USNOB1 errors to 3% 1107 if ra is None and dec is None: 1108 master = crossmatch.get_photometry(ID=os.path.basename(self.ID),radius=radius, 1109 include=include,exclude=exclude,to_units=units, 1110 extra_fields=['_r','_RAJ2000','_DEJ2000']) # was radius=3. 1111 else: 1112 master = crossmatch.get_photometry(ID=os.path.basename(self.ID),ra=ra,dec=dec,radius=radius, 1113 include=include,exclude=exclude,to_units=units, 1114 extra_fields=['_r','_RAJ2000','_DEJ2000']) # was radius=3. 1115 if 'jradeg' in self.info: 1116 master['_RAJ2000'] -= self.info['jradeg'] 1117 master['_DEJ2000'] -= self.info['jdedeg'] 1118 1119 #-- fix the photometry: set default errors to 2% and print it to the 1120 # screen 1121 self.master = fix_master(master,e_default=0.1) 1122 logger.info('\n'+photometry2str(master)) 1123 1124 #-- write to file 1125 self.save_photometry()
1126
1127 - def get_spectrophotometry(self,directory=None,force_download=False):
1128 """ 1129 Retrieve and combine spectra. 1130 1131 B{WARNING:} this function creates FUSE and DIR directories! 1132 """ 1133 if directory is None and os.path.dirname(self.ID) == '': 1134 directory = os.getcwd() 1135 elif os.path.dirname(self.ID) != '': 1136 directory = os.path.dirname(self.ID) 1137 1138 if not os.path.isdir(directory): 1139 os.mkdir(directory) 1140 1141 #-- add spectrophotometric filters to the set 1142 photbands = filters.add_spectrophotometric_filters(R=200,lambda0=950,lambdan=3350) 1143 if hasattr(self,'master') and self.master is not None and not force_download: 1144 if any(['BOXCAR' in photband for photband in self.master['photband']]): 1145 return None 1146 #-- FUSE spectra 1147 fuse_direc = os.path.join(directory,'FUSE') 1148 iue_direc = os.path.join(directory,'IUE') 1149 if not os.path.isdir(fuse_direc) or force_download: 1150 out1 = mast.get_FUSE_spectra(ID=os.path.basename(self.ID),directory=fuse_direc,select=['ano']) 1151 if out1 is None: 1152 out1 = [] 1153 else: 1154 out1 = glob.glob(fuse_direc+'/*') 1155 #-- IUE spectra 1156 if not os.path.isdir(iue_direc) or force_download: 1157 out2 = vizier.get_IUE_spectra(ID=os.path.basename(self.ID),directory=iue_direc,select='lo') 1158 if out2 is None: 1159 out2 = [] 1160 else: 1161 out2 = glob.glob(iue_direc+'/*') 1162 #-- read them in to combine 1163 list_of_spectra = [fits.read_fuse(ff)[:3] for ff in out1] 1164 list_of_spectra+= [fits.read_iue(ff)[:3] for ff in out2] 1165 #-- combine 1166 wave,flux,err,nspec = tools.combine(list_of_spectra) 1167 #-- add to the master 1168 N = len(flux) 1169 units = ['erg/s/cm2/AA']*N 1170 source = ['specphot']*N 1171 self.add_photometry_fromarrays(flux,err,units,photbands,source,flags=nspec) 1172 1173 #-- write to file 1174 self.master = fix_master(self.master,e_default=0.1) 1175 logger.info('\n'+photometry2str(self.master)) 1176 self.save_photometry()
1177 1178 1179 1180
1181 - def exclude(self,names=None,wrange=None,sources=None,indices=None):
1182 """ 1183 Exclude (any) photometry from fitting process. 1184 1185 If called without arguments, all photometry will be excluded. 1186 """ 1187 if names is None and wrange is None and sources is None and indices is None: 1188 wrange = (-np.inf,np.inf) 1189 decide_phot(self.master,names=names,wrange=wrange,sources=sources,indices=indices,include=False,ptype='all')
1190
1191 - def exclude_colors(self,names=None,wrange=None,sources=None,indices=None):
1192 """ 1193 Exclude (color) photometry from fitting process. 1194 """ 1195 if names is None and wrange is None and sources is None and indices is None: 1196 wrange = (-np.inf,0) 1197 decide_phot(self.master,names=names,wrange=wrange,sources=sources,indices=indices,include=False,ptype='col')
1198
1199 - def exclude_abs(self,names=None,wrange=None,sources=None,indices=None):
1200 """ 1201 Exclude (absolute) photometry from fitting process. 1202 """ 1203 decide_phot(self.master,names=names,wrange=wrange,sources=sources,indices=indices,include=False,ptype='abs')
1204 1205
1206 - def include(self,names=None,wrange=None,sources=None,indices=None):
1207 """ 1208 Include (any) photometry in fitting process. 1209 """ 1210 if names is None and wrange is None and sources is None and indices is None: 1211 wrange = (-np.inf,np.inf) 1212 decide_phot(self.master,names=names,wrange=wrange,sources=sources,indices=indices,include=True,ptype='all')
1213
1214 - def include_colors(self,names=None,wrange=None,sources=None,indices=None):
1215 """ 1216 Include (color) photometry in fitting process. 1217 """ 1218 decide_phot(self.master,names=names,wrange=wrange,sources=sources,indices=indices,include=True,ptype='col')
1219
1220 - def include_abs(self,names=None,wrange=None,sources=None,indices=None):
1221 """ 1222 Include (absolute) photometry in fitting process. 1223 """ 1224 decide_phot(self.master,names=names,wrange=wrange,sources=sources,indices=indices,include=True,ptype='abs')
1225
1226 - def set_photometry_scheme(self,scheme,infrared=(1,'micron')):
        """
        Set a default scheme of colors/absolute values to fit the SED.

        Possible values:

            1. scheme = 'abs': include all absolute values, exclude all colors
            2. scheme = 'color': include all colors, exclude all absolute values
            3. scheme = 'combo': include all colors, plus one absolute value per
            system (the one with the smallest relative error)
            4. scheme = 'irfm': mimic the infrared flux method: choose absolute
            values in the infrared (defined via C{infrared}), and colors in the
            optical

        @param infrared: definition of the start of the infrared for the infrared flux method
        @type infrared: tuple (value <float>, unit <str>)
        """
        #-- only absolute values: real SED fitting
        if 'abs' in scheme.lower():
            self.master['include'][self.master['color']] = False
            self.master['include'][-self.master['color']] = True
            logger.info('Fitting procedure will use only absolute fluxes (%d)'%(sum(self.master['include'])))
        #-- only colors: color fitting
        elif 'col' in scheme.lower():
            self.master['include'][self.master['color']] = True
            self.master['include'][-self.master['color']] = False
            logger.info('Fitting procedure will use only colors (%d)'%(sum(self.master['include'])))
        #-- combination: all colors and one absolute value per system
        elif 'com' in scheme.lower():
            self.master['include'][self.master['color'] & self.master['include']] = True
            systems = np.array([photband.split('.')[0] for photband in self.master['photband']])
            set_systems = sorted(list(set(systems)))
            for isys in set_systems:
                keep = -self.master['color'] & (isys==systems) & self.master['include']
                if not sum(keep): continue
                #-- per system, keep the absolute value with the smallest relative error
                index = np.argmin(self.master['e_cmeas'][keep]/self.master['cmeas'][keep])
                index = np.arange(len(self.master))[keep][index]
                self.master['include'][keep] = False
                self.master['include'][index] = True
            logger.info('Fitting procedure will use colors + one absolute flux for each system (%d)'%(sum(self.master['include'])))
        #-- infrared flux method scheme
        elif 'irfm' in scheme.lower():
            #-- check which measurements are in the infrared
            for i,meas in enumerate(self.master):
                #-- for colors, make sure *both* wavelengths are checked
                if meas['color']:
                    bands = filters.get_color_photband(meas['photband'])
                    eff_waves = filters.eff_wave(bands)
                    is_infrared = any([(conversions.Unit(*infrared)<conversions.Unit(eff_wave,'AA')) for eff_wave in eff_waves])
                else:
                    is_infrared = conversions.Unit(*infrared)<conversions.Unit(meas['cwave'],'AA')
                #-- if measurement is in the infrared and it is a color, remove it (only use absolute values in IR)
                if is_infrared and meas['color']:
                    self.master['include'][i] = False
                #-- if measurement is in the infrared and it is not a color, keep it (only use absolute values in IR)
                elif is_infrared:
                    self.master['include'][i] = True
                #-- if measurement is not in the infrared and it is a color, keep it (only use colors in visible)
                elif not is_infrared and meas['color']:
                    self.master['include'][i] = True
                #-- if measurement is not in the infrared and it is not a color, remove it (only use colors in visible)
                elif not is_infrared:
                    self.master['include'][i] = False
            use_colors = sum(self.master['include'] & self.master['color'])
            use_abs = sum(self.master['include'] & -self.master['color'])
            logger.info('Fitting procedure will use colors (%d) < %.3g%s < absolute values (%d)'%(use_colors,infrared[0],infrared[1],use_abs))

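The 'combo' branch above keeps, per photometric system, only the absolute measurement with the smallest relative error. A minimal, self-contained sketch of that selection (the passbands, fluxes and errors below are made up for illustration):

```python
import numpy as np

# Hypothetical passbands "SYSTEM.BAND" with converted fluxes and errors
photbands = np.array(['2MASS.J', '2MASS.KS', 'GENEVA.V', 'GENEVA.B'])
cmeas = np.array([1.0, 2.0, 3.0, 4.0])
e_cmeas = np.array([0.10, 0.10, 0.60, 0.40])
include = np.ones(len(photbands), bool)

# the system name is the part of the photband before the dot
systems = np.array([p.split('.')[0] for p in photbands])
for isys in sorted(set(systems)):
    keep = (systems == isys) & include
    # per system, keep only the measurement with the smallest relative error
    best = np.arange(len(photbands))[keep][np.argmin(e_cmeas[keep] / cmeas[keep])]
    include[keep] = False
    include[best] = True

selected = sorted(photbands[include])
print(selected)
```

Here 2MASS.KS wins within 2MASS (relative error 0.05 vs 0.10) and GENEVA.B within GENEVA (0.10 vs 0.20).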
    def add_photometry_fromarrays(self,meas,e_meas,units,photbands,source,flags=None):
        """
        Add photometry from numpy arrays.

        By default unflagged, and at the ra and dec of the star. No color and
        included.

        @param meas: original measurements (fluxes, magnitudes...)
        @type meas: array
        @param e_meas: error on original measurements in same units as C{meas}
        @type e_meas: array
        @param units: units of original measurements
        @type units: array of strings
        @param photbands: photometric passbands of original measurements
        @type photbands: array of strings
        @param source: source of original measurements
        @type source: array of strings
        @param flags: flags on the original measurements (defaults to unflagged)
        @type flags: array
        """
        if flags is None:
            flags = np.nan*np.ones(len(meas))
        #-- if master record array does not exist, make a new one
        if not hasattr(self,'master') or self.master is None:
            dtypes = [('meas','f8'),('e_meas','f8'),('flag','S20'),('unit','S30'),('photband','S30'),('source','S50'),('_r','f8'),('_RAJ2000','f8'),\
                      ('_DEJ2000','f8'),('cwave','f8'),('cmeas','f8'),('e_cmeas','f8'),('cunit','S50'),('color',bool),('include',bool),\
                      ('bibcode','S20'),('comments','S200')]
            logger.info('No previous measurements available, initialising master record')
            self.master = np.rec.fromarrays(np.array([ [] for i in dtypes]), dtype=dtypes)
            _to_unit = 'erg/s/cm2/AA'
        else:
            _to_unit = self.master['cunit'][0]
        extra_master = np.zeros(len(meas),dtype=self.master.dtype)

        for i,(m,e_m,u,p,s,f) in enumerate(zip(meas,e_meas,units,photbands,source,flags)):
            photsys,photband = p.split('.')
            if filters.is_color(p):
                to_unit = 'flux_ratio'
                color = True
            else:
                to_unit = _to_unit
                color = False
            if e_m>0:
                cm,e_cm = conversions.convert(u,to_unit,m,e_m,photband=p)
            else:
                cm,e_cm = conversions.convert(u,to_unit,m,photband=p),np.nan
            eff_wave = filters.eff_wave(p)
            extra_master['cmeas'][i] = cm
            extra_master['e_cmeas'][i] = e_cm
            extra_master['cwave'][i] = eff_wave
            extra_master['cunit'][i] = to_unit
            extra_master['color'][i] = color
            extra_master['include'][i] = True
            extra_master['meas'][i] = meas[i]
            extra_master['e_meas'][i] = e_meas[i]
            extra_master['unit'][i] = units[i]
            extra_master['photband'][i] = photbands[i]
            extra_master['source'][i] = source[i]
            extra_master['flag'][i] = flags[i]
            if 'bibcode' in extra_master.dtype.names:
                extra_master['bibcode'][i] = '-'
            if 'comments' in extra_master.dtype.names:
                extra_master['comments'][i] = '-'

        logger.info('Original measurements:\n%s'%(photometry2str(self.master)))
        logger.info('Appending:\n%s'%(photometry2str(extra_master)))
        self.master = fix_master(np.hstack([self.master,extra_master]),e_default=0.1)
        logger.info('Final measurements:\n%s'%(photometry2str(self.master)))

    #}
    #{ Additional information

    def is_target(self,name):
        """
        Check if this SED represents the object C{name}.

        Purpose: solve alias problems. Maybe the ID is 'HD129929', and you are
        checking for "V* V836 Cen", which is the same target.

        @param name: object name
        @type name: str
        @return: True if this object instance represents the target C{name}.
        @rtype: bool
        """
        try:
            info = sesame.search(name)
            oname = info['oname']
        except:
            logger.warning('Did not find {:s} on Simbad'.format(name))
            return False
        return oname==self.info['oname']

    def has_photfile(self):
        """
        Check if this SED has a phot file.

        @return: True if this object instance has a photfile
        @rtype: bool
        """
        return os.path.isfile(self.photfile)

    def get_distance_from_plx(self,plx=None,lutz_kelker=True,unit='pc'):
        """
        Get the probability density function for the distance, given a parallax.

        If no parallax is given, the catalogue value will be used. If that one
        is also not available, a ValueError will be raised.

        Parallax must be given in mas.

        If the parallax is a float without uncertainty, an uncertainty of 0 will
        be assumed.

        If C{lutz_kelker=True}, a probability density function will be given for
        the distance, otherwise the distance is computed by simply inverting
        the parallax.

        Distance is returned in parsec (pc) by default.

        @return: distance and uncertainty, or distance grid and probability density
        @rtype: (float,float) or (ndarray,ndarray)
        """
        #-- we need parallax and galactic position
        if plx is None and 'plx' in self.info:
            plx = self.info['plx']['v'],self.info['plx']['e']
        elif plx is None:
            raise ValueError('distance cannot be computed from parallax (no parallax)')
        if not 'galpos' in self.info:
            raise ValueError('distance cannot be computed from parallax (no position)')
        gal = self.info['galpos']
        #-- if the parallax has an uncertainty and the Lutz-Kelker bias needs
        #   to be taken into account, compute probability density function
        if lutz_kelker and hasattr(plx,'__iter__'):
            d = np.logspace(np.log10(0.1),np.log10(25000),100000)
            dprob = distance.distprob(d,gal[1],plx)
            dprob = dprob / dprob.max()
            logger.info('Computed distance to target with Lutz-Kelker bias')
            return d,dprob
        #-- just invert (with or without error)
        else:
            dist = (conversions.Unit(plx,'kpc-1')**-1).convert(unit)
            logger.info('Computed distance to target via parallax inversion: {:s}'.format(dist))
            return dist.get_value()

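The non-Lutz-Kelker branch simply inverts the parallax: a parallax in milliarcseconds corresponds to a distance of 1000/plx parsec. A one-line sketch of that relation:

```python
def plx_to_distance_pc(plx_mas):
    """Invert a parallax in milliarcseconds to a distance in parsec."""
    return 1000.0 / plx_mas

print(plx_to_distance_pc(10.0))  # a 10 mas parallax corresponds to 100 pc
```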
    def get_interstellar_reddening(self,distance=None, Rv=3.1):
        """
        Under construction.
        """
        gal = self.info['galpos']
        if distance is None:
            distance = self.get_distance_from_plx(lutz_kelker=False,unit='pc')[0]
        output = {}
        for model in ['arenou','schlegel','drimmel','marshall']:
            ext = extinctionmodels.findext(gal[0], gal[1], model=model, distance=distance)
            if ext is not None:
                output[model] = ext/Rv
        return output

    def get_angular_diameter(self):
        """
        Under construction.
        """
        raise NotImplementedError

    def compute_distance(self,mtype='igrid_search'):
        """
        Compute distance from radius and angular diameter (scale factor).

        The outcome is added to the C{results} attribute::

            distance = r/sqrt(scale)

        This is particularly useful when you have added constraints from
        solar-like oscillations (L{add_constraint_slo}).

        @return: distance,uncertainty (pc)
        @rtype: (float,float)
        """
        grid = self.results[mtype]['grid']
        radius = grid['radius']
        e_radius = grid['e_radius']
        scale = grid['scale']
        e_scale = grid['escale']

        radius = conversions.unumpy.uarray([radius,e_radius])
        scale = conversions.unumpy.uarray([scale,e_scale])

        distance = radius/scale**0.5
        distance = conversions.convert('Rsol','pc',distance)
        distance,e_distance = conversions.unumpy.nominal_values(distance),\
                              conversions.unumpy.std_devs(distance)
        self.results[mtype]['grid'] = pl.mlab.rec_append_fields(grid,\
                                      ['distance','e_distance'],[distance,e_distance])
        d,e_d = distance.mean(),distance.std()
        logger.info("Computed distance from R and scale: {0}+/-{1} pc".format(d,e_d))
        return d,e_d

    #}

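The distance relation used above follows from the scale factor behaving as (R/d)**2, so d = R/sqrt(scale). A minimal sketch without the uncertainty propagation, assuming the usual values for the solar radius and the parsec:

```python
import math

# Assumed conversion constant: one solar radius expressed in parsec
RSOL_PC = 6.957e8 / 3.0857e16

def distance_pc(radius_rsol, scale):
    # the scale factor behaves as (R/d)**2, so d = R / sqrt(scale)
    return radius_rsol * RSOL_PC / math.sqrt(scale)
```

For example, a 1 Rsol star whose scale factor equals (RSOL_PC/10)**2 sits at 10 pc.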
    def generate_ranges(self,type='single',start_from='igrid_search',distribution='uniform',**kwargs):
        """
        Generate a sensible search range for each parameter.
        """
        limits = {}
        exist_previous = (start_from in self.results and 'CI' in self.results[start_from])

        #-- run over all parnames, and generate the parameter ranges for each
        #   component:
        #   (1) if a previous parameter search is done and the user has not set
        #       the parameter range, derive a sensible parameter range from the
        #       previous results.
        #   (2) if no previous search is done, use infinite upper and lower bounds
        #   (3) if ranges are given, use those. Cycle over them, and use the
        #       strategy from (1) if no parameter ranges are given for a specific
        #       component.
        for par_range_name in kwargs:
            #-- when "teffrange" is given, parrange will be the value of the
            #   keyword "teffrange", and parname will be "teff".
            parname = par_range_name.rsplit('range')[0]
            parrange = kwargs.get(par_range_name,None)
            #-- the three cases described above:
            if exist_previous and parrange is None:
                lkey = parname+'_l'
                ukey = parname+'_u'
                #-- if the parameter was not used in the fit, stick to the
                #   default given value
                if not lkey in self.results[start_from]['CI']:
                    parrange = kwargs[par_range_name]
                #-- else we can derive a better parameter range
                else:
                    parrange = (self.results[start_from]['CI'][lkey],
                                self.results[start_from]['CI'][ukey])
            elif parrange is None:
                parrange = (-np.inf,np.inf)

            #-- now, the ranges are (lower,upper) for the uniform distribution,
            #   and (mu,scale) for the normal distribution
            if distribution=='normal':
                parrange = ((parrange[0]+parrange[1])/2.,(parrange[1]-parrange[0])/6.)
            elif distribution!='uniform':
                raise NotImplementedError, 'Any distribution other than "uniform" and "normal" has not been implemented yet!'
            limits[par_range_name] = parrange
        #-- this returns the kwargs but with filled-in limits
        logger.info('Parameter ranges calculated starting from {0:s} and using distribution {1:s}.'.format(start_from,distribution))
        return limits

    #{ Fitting routines

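For C{distribution='normal'}, C{generate_ranges} reinterprets a (lower, upper) range as (mu, scale). A sketch of that conversion, placing roughly 99.7% of the mass inside the original bounds:

```python
def uniform_to_normal(parrange):
    """Convert a (lower, upper) range into a (mu, sigma) pair."""
    lower, upper = parrange
    mu = (lower + upper) / 2.0
    sigma = (upper - lower) / 6.0   # +/- 3 sigma spans the original range
    return mu, sigma

print(uniform_to_normal((4000.0, 7000.0)))  # (5500.0, 500.0)
```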
    def clip_grid(self,mtype='igrid_search',CI_limit=None):
        """
        Clip grid on CI limit, to save memory.

        @param mtype: type of results to clip
        @type mtype: str
        @param CI_limit: confidence limit to clip on
        @type CI_limit: float (between 0 (clips everything) and 1 (clips nothing))
        """
        if CI_limit is None:
            CI_limit = self.CI_limit
        new_grid = self.results[mtype]['grid']
        new_grid = new_grid[new_grid['ci_red']<=CI_limit]
        self.results[mtype]['grid'] = new_grid
        logger.info("Clipped grid at {:.6f}%".format(CI_limit*100))

    def collect_results(self, grid=None, fitresults=None, mtype='igrid_search', selfact='chisq', **kwargs):
        """Create a record array of all fit results and remove the failures."""

        gridnames = sorted(grid.keys())
        fitnames = sorted(fitresults.keys())
        pardtypes = [(name,'f8') for name in gridnames]+[(name,'f8') for name in fitnames]
        pararrays = [grid[name] for name in gridnames]+[fitresults[name] for name in fitnames]

        grid_results = np.rec.fromarrays(pararrays,dtype=pardtypes)

        #-- exclude failures
        failures = np.isnan(grid_results[selfact])
        if sum(failures):
            logger.info('Excluded {0} failed results (nan)'.format(sum(failures)))
            grid_results = grid_results[-failures]

        #-- make room for chi2 statistics
        grid_results = mlab.rec_append_fields(grid_results, 'ci_raw', np.zeros(len(grid_results)))
        grid_results = mlab.rec_append_fields(grid_results, 'ci_red', np.zeros(len(grid_results)))

        #-- take the previous results into account if they exist:
        if not mtype in self.results:
            self.results[mtype] = {}
        elif 'grid' in self.results[mtype]:
            logger.info('Appending previous results ({:d}+{:d})'.format(len(self.results[mtype]['grid']),len(grid_results)))
            ex_names = grid_results.dtype.names
            ex_grid = np.rec.fromarrays([self.results[mtype]['grid'][exname] for exname in ex_names],
                                        names=ex_names)
            grid_results = np.hstack([ex_grid,grid_results])

        #-- inverse sort according to chisq: this means the best models are at
        #   the end (mainly for plotting reasons, so that the best models are
        #   on top).
        sa = np.argsort(grid_results[selfact])[::-1]
        grid_results = grid_results[sa]

        self.results[mtype]['grid'] = grid_results

        logger.info('Total of %d grid points, best chisq=%s'%(len(grid_results), grid_results[selfact][-1]))

    def calculateDF(self, **ranges):
        """Calculate the degrees of freedom from the given ranges."""

        df, df_info = 1, ['theta']
        for range_name in ranges:
            if re.search(r'ebv\d?range$', range_name):
                if not 'ebv' in df_info:
                    df += 1
                    df_info.append('ebv')
                else:
                    continue
            elif not np.allclose(ranges[range_name][0],ranges[range_name][1]):
                df += 1
                df_info.append(range_name[0:-5])
        df_info.sort()
        logger.info('Degrees of freedom = {} ({})'.format(df,', '.join(df_info)))

        return df, df_info

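The counting above starts from one degree of freedom for the angular diameter (theta), adds one for E(B-V) regardless of how many ebv ranges are passed, and one for every other range whose bounds actually differ. A self-contained sketch of that logic:

```python
import re
import numpy as np

def count_df(**ranges):
    # the angular diameter (scale factor) is always fitted
    df, df_info = 1, ['theta']
    for name in ranges:
        if re.search(r'ebv\d?range$', name):
            # reddening counts once, no matter how many components share it
            if 'ebv' not in df_info:
                df += 1
                df_info.append('ebv')
        elif not np.allclose(ranges[name][0], ranges[name][1]):
            # a range with distinct bounds is a free parameter
            df += 1
            df_info.append(name[:-5])
    return df, sorted(df_info)

print(count_df(teffrange=(5000, 7000), loggrange=(4.0, 4.0), ebvrange=(0.0, 0.1)))
# (3, ['ebv', 'teff', 'theta'])
```

Here logg is fixed (equal bounds), so only theta, teff and ebv contribute.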
    def calculate_statistics(self, df=None, ranges=None, mtype='igrid_search', selfact='chisq'):
        """
        Calculate the chi2 and reduced chi2 based on the number of observations
        and the degrees of freedom. If df is not provided, tries to calculate
        df from the fitting ranges.
        """

        #-- if necessary, calculate degrees of freedom from the ranges
        if df is None and ranges is not None:
            df,df_info = self.calculateDF(**ranges)
        elif df is None:
            logger.warning('Cannot compute degrees of freedom!!! CHI2 might not make sense. (using df=5)')
            df = 5

        #-- do the statistics
        N = sum(self.master['include'])
        k = N-df
        if k<=0:
            logger.warning('Not enough data to compute CHI2: it will not make sense')
            k = 1
        logger.info('Calculated statistics based on df={0} and Nobs={1}'.format(df,N))

        #-- rescale if needed and compute confidence intervals
        results = self.results[mtype]['grid']
        factor = max(results[selfact][-1]/k,1)
        logger.warning('CHI2 rescaling factor equals %g'%(factor))
        results['ci_raw'] = scipy.stats.distributions.chi2.cdf(results[selfact],k)
        results['ci_red'] = scipy.stats.distributions.chi2.cdf(results[selfact]/factor,k)

        #-- store the results
        self.results[mtype]['grid'] = results
        self.results[mtype]['factor'] = factor

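The rescaling step above divides all chi2 values by the best (last, since the grid is sorted worst-to-best) reduced chi2 whenever that exceeds 1, before mapping them through the chi2 CDF. A sketch with made-up chi2 values:

```python
import numpy as np
import scipy.stats

chisq = np.array([50.0, 20.0, 12.0])   # hypothetical, sorted worst-to-best
k = 10                                  # number of observations minus df
factor = max(chisq[-1] / k, 1.0)        # rescale so the best reduced chi2 is <= 1
ci_red = scipy.stats.chi2.cdf(chisq / factor, k)
```

A model is then "within the 95% confidence region" when its ci_red is below 0.95; better (lower chi2) models get lower ci_red values.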
    def calculate_confidence_intervals(self,mtype='igrid_search',chi2_type='red',CI_limit=None):
        """
        Compute the confidence interval of all columns in the results grid.

        @param mtype: type of results to compute confidence intervals of
        @type mtype: str
        @param chi2_type: type of chi2 (raw or reduced)
        @type chi2_type: str ('raw' or 'red')
        @param CI_limit: confidence limit to clip on
        @type CI_limit: float (between 0 (clips everything) and 1 (clips nothing))
        """
        #-- get some info
        grid_results = self.results[mtype]['grid']
        if CI_limit is None or CI_limit > 1.0:
            CI_limit = self.CI_limit
        #-- the chi2 grid is ordered: where is the value closest to the CI limit?
        region = self.results[mtype]['grid']['ci_'+chi2_type]<=CI_limit
        if sum(region)==0:
            raise ValueError("No models in the sample have a chi2_{} below the limit {}. Try increasing the number of models, increasing the CI_limit or choosing different photometry.".format(chi2_type,CI_limit))
        #-- now compute the confidence intervals
        cilow, cihigh, value = [],[],[]
        for name in grid_results.dtype.names:
            cilow.append(grid_results[name][region].min())
            value.append(grid_results[name][-1])
            cihigh.append(grid_results[name][region].max())

        return dict(name=grid_results.dtype.names, value=value, cilow=cilow, cihigh=cihigh)

    def store_confidence_intervals(self, mtype='igrid_search', name=None, value=None, cilow=None, \
                                   cihigh=None, **kwargs):
        """
        Save the provided confidence intervals in the results dictionary of self.

        They are stored in C{self.results[mtype]['CI']}, with key C{name} for
        the best-fit value, C{name_l} for the lower limit and C{name_u} for the
        upper limit.

        @param mtype: the search type
        @type mtype: str
        @param name: names of the parameters
        @type name: array
        @param value: best fit values
        @type value: array
        @param cilow: lower CI limits
        @type cilow: array
        @param cihigh: upper CI limits
        @type cihigh: array
        """
        if not 'CI' in self.results[mtype]:
            self.results[mtype]['CI'] = {}
        for name, val, cil, cih in zip(name, value, cilow, cihigh):
            self.results[mtype]['CI'][name+'_l'] = cil
            self.results[mtype]['CI'][name] = val
            self.results[mtype]['CI'][name+'_u'] = cih

        #-- send the stored CI to the logger
        logger.info(self.ci2str(mtype=mtype))

    def igrid_search(self,points=100000,teffrange=None,loggrange=None,ebvrange=None,
                     zrange=(0,0),rvrange=(3.1,3.1),vradrange=(0,0),
                     df=None,CI_limit=None,set_model=True,**kwargs):
        """
        Fit fundamental parameters using a (pre-integrated) grid search.

        If called consecutively, the ranges will be set to the CI_limit of the
        previous estimation, unless set explicitly.

        If called for the first time, the ranges will be +/- np.inf by default,
        unless set explicitly.
        """
        if CI_limit is None or CI_limit > 1.0:
            CI_limit = self.CI_limit

        #-- set default limits
        ranges = self.generate_ranges(teffrange=teffrange,\
                                      loggrange=loggrange,ebvrange=ebvrange,\
                                      zrange=zrange,rvrange=rvrange,vradrange=vradrange)

        #-- grid search on all included data: extract the best CHI2
        include_grid = self.master['include']
        logger.info('The following measurements are included in the fitting process:\n%s'%(photometry2str(self.master[include_grid])))

        #-- build the grid, run over the grid and calculate the CHI2
        pars = fit.generate_grid_pix(self.master['photband'][include_grid],points=points,**ranges)
        chisqs,scales,e_scales,lumis = fit.igrid_search_pix(self.master['cmeas'][include_grid],
                                                            self.master['e_cmeas'][include_grid],
                                                            self.master['photband'][include_grid],**pars)
        fitres = dict(chisq=chisqs, scale=scales, escale=e_scales, labs=lumis)

        #-- collect all the results in a record array
        self.collect_results(grid=pars, fitresults=fitres, mtype='igrid_search')

        #-- do the statistics
        self.calculate_statistics(df=df, ranges=ranges, mtype='igrid_search')

        #-- compute the confidence intervals
        ci = self.calculate_confidence_intervals(mtype='igrid_search',chi2_type='red',CI_limit=CI_limit)
        self.store_confidence_intervals(mtype='igrid_search', **ci)

        #-- remember the best model
        if set_model: self.set_best_model()

    def generate_fit_param(self, start_from='igrid_search', **pars):
        """
        Generate a dictionary with parameter information that can be handled by
        fit.iminimize.
        """
        result = dict()
        for key in pars.keys():
            if re.search("range$", key):
                result[key[0:-5]+"_min"] = pars[key][0]
                result[key[0:-5]+"_max"] = pars[key][1]
                #-- if min == max: fix the parameter and force value = min
                if np.allclose([pars[key][0]], [pars[key][1]]):
                    result[key[0:-5]+"_vary"] = False
                    result[key[0:-5]+"_value"] = pars[key][0]
                else:
                    result[key[0:-5]+"_vary"] = True
            else:
                #-- store the start value. If None, look in start_from for a new start value.
                if pars[key] is not None and not key+"_value" in result:
                    result[key+"_value"] = pars[key]
                elif pars[key] is None and not key+"_value" in result:
                    result[key+"_value"] = self.results[start_from]['CI'][key]

        return result

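The range-to-parameter translation above can be sketched without the class machinery; this standalone version (dropping only the C{start_from} fallback) shows the C{_min}/C{_max}/C{_vary}/C{_value} key layout that the minimizer expects:

```python
import re
import numpy as np

def to_fit_param(**pars):
    result = {}
    for key, val in pars.items():
        if re.search('range$', key):
            base = key[:-5]
            result[base + '_min'], result[base + '_max'] = val
            if np.allclose([val[0]], [val[1]]):
                # degenerate range: fix the parameter at its single value
                result[base + '_vary'] = False
                result[base + '_value'] = val[0]
            else:
                result[base + '_vary'] = True
        elif val is not None:
            result[key + '_value'] = val
    return result
```

For example, a degenerate teff range fixes teff, while a plain logg keyword only sets a start value.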
    def calculate_iminimize_CI(self, mtype='iminimize', CI_limit=0.66, **kwargs):
        """Calculate confidence intervals for all parameters of an iminimize fit."""

        #-- get the best fit parameters and ranges
        pars = {}
        skip = ['scale', 'chisq', 'nfev', 'labs', 'ci_raw', 'ci_red', 'escale']
        for name in self.results[mtype]['CI'].keys():
            name = re.sub('_[u|l]$', '', name)
            if not name in pars and not name in skip:
                pars[name] = self.results[mtype]['CI'][name]
                pars[name+"range"] = [self.results[mtype]['CI'][name+"_l"],\
                                      self.results[mtype]['CI'][name+"_u"]]
        pars.update(kwargs)
        pars = self.generate_fit_param(**pars)

        #-- calculate the confidence intervals
        include_grid = self.master['include']
        ci = fit.calculate_iminimize_CI(self.master['cmeas'][include_grid],
                                        self.master['e_cmeas'][include_grid],
                                        self.master['photband'][include_grid],
                                        CI_limit=CI_limit,constraints=self.constraints, **pars)

        #-- add the scale factor
        ci['name'] = np.append(ci['name'], 'scale')
        ci['value'] = np.append(ci['value'], self.results[mtype]['grid']['scale'][-1])
        ci['cilow'] = np.append(ci['cilow'], min(self.results[mtype]['grid']['scale']))
        ci['cihigh'] = np.append(ci['cihigh'], max(self.results[mtype]['grid']['scale']))

        logger.info('Calculated %s%% confidence intervals for all parameters'%(CI_limit))

        self.store_confidence_intervals(mtype='iminimize', **ci)

    def calculate_iminimize_CI2D(self,xpar, ypar, mtype='iminimize', limits=None, res=10, **kwargs):
        """Calculate a 2D confidence-interval grid for the parameters xpar and ypar."""

        #-- get the best fitting parameters
        pars = {}
        skip = ['scale', 'chisq', 'nfev', 'labs', 'ci_raw', 'ci_red', 'escale']
        for name in self.results[mtype]['CI'].keys():
            name = re.sub('_[u|l]$', '', name)
            if not name in pars and not name in skip:
                pars[name] = self.results[mtype]['CI'][name]
                pars[name+"range"] = [self.results[mtype]['CI'][name+"_l"],\
                                      self.results[mtype]['CI'][name+"_u"]]
        pars.update(kwargs)
        pars = self.generate_fit_param(**pars)

        logger.info('Calculating 2D confidence intervals for %s--%s'%(xpar,ypar))

        #-- calculate the confidence intervals
        include_grid = self.master['include']
        x,y,chi2, raw, red = fit.calculate_iminimize_CI2D(self.master['cmeas'][include_grid],
                                                          self.master['e_cmeas'][include_grid],
                                                          self.master['photband'][include_grid],
                                                          xpar, ypar, limits=limits, res=res,
                                                          constraints=self.constraints, **pars)

        #-- store the CI
        if not 'CI2D' in self.results[mtype]: self.results[mtype]['CI2D'] = {}
        self.results[mtype]['CI2D'][xpar+"-"+ypar] = dict(x=x, y=y, ci_chi2=chi2,\
                                                          ci_raw=raw, ci_red=red)

    def _get_imin_ci(self, mtype='iminimize',**ranges):
        """Return CI information for store_confidence_intervals."""
        names, values, cil, cih = [],[],[],[]
        for key in ranges.keys():
            name = key[0:-5]
            names.append(name)
            values.append(self.results[mtype]['grid'][name][-1])
            cil.append(ranges[key][0])
            cih.append(ranges[key][1])
        names.append('scale')
        values.append(self.results[mtype]['grid']['scale'][-1])
        cil.append(min(self.results[mtype]['grid']['scale']))
        cih.append(max(self.results[mtype]['grid']['scale']))
        return dict(name=names, value=values, cilow=cil, cihigh=cih)

    def iminimize(self, teff=None, logg=None, ebv=None, z=0, rv=3.1, vrad=0, teffrange=None,
                  loggrange=None, ebvrange=None, zrange=None, rvrange=None, vradrange=None,
                  points=None, distance=None, start_from='igrid_search',df=None, CI_limit=None,
                  calc_ci=False, set_model=True, **kwargs):
        """
        Basic minimizer method for SED fitting, implemented using the lmfit
        library from sigproc.fit.
        """

        #-- set default limits and starting values
        ranges = self.generate_ranges(teffrange=teffrange,loggrange=loggrange,\
                                      ebvrange=ebvrange,zrange=zrange,rvrange=rvrange,\
                                      vradrange=vradrange, start_from=start_from)
        pars = self.generate_fit_param(teff=teff, logg=logg, ebv=ebv, z=z, \
                                       rv=rv, vrad=vrad, start_from=start_from, **ranges)
        fitkws = dict(distance=distance) if distance is not None else dict()

        #-- grid search on all included data: extract the best CHI2
        include_grid = self.master['include']
        logger.info('The following measurements are included in the fitting process:\n%s'% \
                    (photometry2str(self.master[include_grid])))

        #-- pass all ranges and starting values to the fitter
        grid, chisq, nfev, scale, lumis = fit.iminimize(self.master['cmeas'][include_grid],
                                                        self.master['e_cmeas'][include_grid],
                                                        self.master['photband'][include_grid],
                                                        fitkws=fitkws, points=points,**pars)

        logger.info('Minimizer success with startpoints=%s, chi2=%s, nfev=%s'%(len(chisq), chisq[0], nfev[0]))
        #-- handle the results
        fitres = dict(chisq=chisq, nfev=nfev, scale=scale, labs=lumis)
        self.collect_results(grid=grid, fitresults=fitres, mtype='iminimize', selfact='chisq')

        #-- do the statistics
        self.calculate_statistics(df=df, ranges=ranges, mtype='iminimize', selfact='chisq')

        #-- store the confidence intervals
        ci = self._get_imin_ci(mtype='iminimize',**ranges)
        self.store_confidence_intervals(mtype='iminimize', **ci)
        if calc_ci:
            self.calculate_iminimize_CI(mtype='iminimize', CI_limit=CI_limit)

        #-- remember the best model
        if set_model: self.set_best_model(mtype='iminimize')

    def imc(self,teffrange=None,loggrange=None,ebvrange=None,zrange=None,start_from='imc',\
            distribution='uniform',points=None,fitmethod='fmin',disturb=True):
        """Monte Carlo estimation of fundamental parameters by refitting disturbed data."""

        limits,type = self.generate_ranges(teffrange=teffrange,loggrange=loggrange,\
                                           ebvrange=ebvrange,zrange=zrange,distribution=distribution,\
                                           start_from='imc')

        #-- grid search on all included data: extract the best CHI2
        include = self.master['include']
        meas = self.master['cmeas'][include]
        if disturb:
            emeas = self.master['e_cmeas'][include]
        else:
            emeas = self.master['e_cmeas'][include]*1e-6
        photbands = self.master['photband'][include]
        logger.info('The following measurements are included in the fitting process:\n%s'%(photometry2str(self.master[include])))

        #-- generate initial guesses
        teffs,loggs,ebvs,zs,radii = fit.generate_grid(self.master['photband'][include],type=type,points=points+25,**limits)
        NrPoints = len(teffs)>points and points or len(teffs)
        firstoutput = np.zeros((len(teffs)-NrPoints,9))
        output = np.zeros((NrPoints,9))

        #-- fit the original data a number of times
        for i,(teff,logg,ebv,z) in enumerate(zip(teffs[NrPoints:],loggs[NrPoints:],ebvs[NrPoints:],zs[NrPoints:])):
            try:
                fittedpars,warnflag = fit.iminimize2(meas,emeas,photbands,teff,logg,ebv,z,fitmethod=fitmethod)
                firstoutput[i,1:] = fittedpars
                firstoutput[i,0] = warnflag
            except IOError:
                firstoutput[i,0] = 3

        logger.info("{0}/{1} fits on original data failed (max func call)".format(sum(firstoutput[:,0]==1),firstoutput.shape[0]))
        logger.info("{0}/{1} fits on original data failed (max iter)".format(sum(firstoutput[:,0]==2),firstoutput.shape[0]))
        logger.info("{0}/{1} fits on original data failed (outside of grid)".format(sum(firstoutput[:,0]==3),firstoutput.shape[0]))

        #-- retrieve the best fitting result and make it the first entry of output
        keep = (firstoutput[:,0]==0) & (firstoutput[:,1]>0)
        best = firstoutput[keep,-2].argmin()
        output[-1,:] = firstoutput[keep][best,:]

        #-- now do the actual Monte Carlo simulation
        for i,(teff,logg,ebv,z) in enumerate(zip(teffs[:NrPoints-1],loggs[:NrPoints-1],ebvs[:NrPoints-1],zs[:NrPoints-1])):
            newmeas = meas + np.random.normal(scale=emeas)
            try:
                fittedpars,warnflag = fit.iminimize2(newmeas,emeas,photbands,teff,logg,ebv,z,fitmethod=fitmethod)
                output[i,1:] = fittedpars
                output[i,0] = warnflag
            except IOError:
                output[i,0] = 3

        logger.info("{0}/{1} MC simulations failed (max func call)".format(sum(output[:,0]==1),NrPoints))
        logger.info("{0}/{1} MC simulations failed (max iter)".format(sum(output[:,0]==2),NrPoints))
        logger.info("{0}/{1} MC simulations failed (outside of grid)".format(sum(output[:,0]==3),NrPoints))

        #-- remove nonsense results
        keep = (output[:,0]==0) & (output[:,1]>0)
        output = output[keep]
        output = np.rec.fromarrays(output[:,1:-1].T,names=['teff','logg','ebv','z','labs','chisq','scale'])

        #-- derive confidence intervals and median values
        self.results['imc'] = {}
        self.results['imc']['CI'] = {}
        self.results['imc']['grid'] = output
        CI_limit = 0.95
        for name in output.dtype.names:
            sortarr = np.sort(output[name])
            #-- trim 2.5% off top and bottom, to arrive at the 95% CI
            trimarr = scipy.stats.trimboth(sortarr,(1-CI_limit)/2.)
            self.results['imc']['CI'][name+'_l'] = trimarr.min()
            self.results['imc']['CI'][name] = output[name][-1]
            self.results['imc']['CI'][name+'_u'] = trimarr.max()
            logger.info('%i%% CI %s: %g <= %g <= %g'%(CI_limit*100,name,self.results['imc']['CI'][name+'_l'],
                                                      self.results['imc']['CI'][name],
                                                      self.results['imc']['CI'][name+'_u']))
        self.set_best_model(mtype='imc')

    #}

    #{ Interfaces

    def clear(self):
        """
        Clear the results.
        """
        self.results = {}

    def set_best_model(self,mtype='igrid_search',law='fitzpatrick2004',**kwargs):
        """
        Get the reddened and unreddened model.
        """
        logger.info('Interpolating approximate full SED of best model')

        #-- synthetic flux
        include = self.master['include']
        synflux = np.zeros(len(self.master['photband']))
        keep = (self.master['cwave']<1.6e6) | np.isnan(self.master['cwave'])
        keep = keep & include

        if mtype in ['igrid_search', 'iminimize']:
            #-- get the metallicity right
            files = model.get_file(z='*')
            if type(files) == str: files = [files] #files needs to be a list!
            metals = np.array([pyfits.getheader(ff)['Z'] for ff in files])
            metals = metals[np.argmin(np.abs(metals-self.results[mtype]['CI']['z']))]
            scale = self.results[mtype]['CI']['scale']
            #-- get (approximated) reddened and unreddened model
            wave,flux = model.get_table(teff=self.results[mtype]['CI']['teff'],
                                        logg=self.results[mtype]['CI']['logg'],
                                        ebv=self.results[mtype]['CI']['ebv'],
                                        z=metals,
                                        law=law)
            wave_ur,flux_ur = model.get_table(teff=self.results[mtype]['CI']['teff'],
                                              logg=self.results[mtype]['CI']['logg'],
                                              ebv=0,
                                              z=metals,
                                              law=law)
            #-- get synthetic photometry
            synflux_,Labs = model.get_itable(teff=self.results[mtype]['CI']['teff'],
                                             logg=self.results[mtype]['CI']['logg'],
                                             ebv=self.results[mtype]['CI']['ebv'],
                                             z=self.results[mtype]['CI']['z'],
                                             photbands=self.master['photband'][keep])

            flux,flux_ur = flux*scale,flux_ur*scale

            synflux[keep] = synflux_
            synflux[-self.master['color']] *= scale
            chi2 = (self.master['cmeas']-synflux)**2/self.master['e_cmeas']**2
            #-- calculate effective wavelengths of the photometric bands via
            #   the model values
            eff_waves = filters.eff_wave(self.master['photband'],model=(wave,flux))
            self.results[mtype]['model'] = wave,flux,flux_ur
            self.results[mtype]['synflux'] = eff_waves,synflux,self.master['photband']
            self.results[mtype]['chi2'] = chi2

    def set_model(self,wave,flux,label='manual'):
        """
        Manually set the best SED model.

        This is particularly useful when working with calibrators for which
        there is an observed SED.

        The label will be used to store the model in the C{results} attribute.

        @param wave: wavelength array (angstrom)
        @type wave: ndarray
        @param flux: flux array (erg/s/cm2/AA)
        @type flux: ndarray
        @param label: key used to store the model and synthetic photometry in C{results}
        @type label: str
        """
        #-- necessary information
        photbands = self.master['photband']
        is_color = self.master['color']
        include = self.master['include']
        synflux = np.zeros(len(photbands))

        #-- compute synthetic photometry
        synflux[(-is_color) & include] = model.synthetic_flux(wave,flux,photbands[(-is_color) & include])
        synflux[is_color & include] = model.synthetic_color(wave,flux,photbands[is_color & include])
        chi2 = (self.master['cmeas']-synflux)**2/self.master['e_cmeas']**2
        eff_waves = filters.eff_wave(photbands)

        if not label in self.results:
            self.results[label] = {}
        self.results[label]['model'] = wave,flux,flux
        self.results[label]['synflux'] = eff_waves,synflux,photbands
        self.results[label]['chi2'] = chi2
        logger.debug('Stored model SED in {}'.format(label))

2078 - def get_model(self,label='igrid_search'):
2079 """ 2080 Retrieve the best SED model. 2081 2082 B{Warning}: the grid search interpolation is also done with interpolation 2083 in metallicity, while this is not the case for the best full SED model. 2084 Interpolation of the full SED is only done in teff/logg, and the 2085 metallicity is chosen to be equal to the closest grid point. On the 2086 other hand, while the reddening value is interpolated in the grid search, 2087 this is not the case for the best model (the model is simply reddened 2088 according to the best value found). So don't be surprised if you find 2089 differences between the values of C{self.results[label]['synflux']}, 2090 which are the values for the interpolated photometric points of the 2091 best model, and the synthetic photometry obtained by manual integration 2092 of the returned full SED model. Those differences should be incredibly 2093 small and mainly due to the metallicity. 2094 """ 2095 wave,flux,urflux = self.results[label]['model'] 2096 return wave,flux,urflux
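The nearest-grid-point selection in metallicity that this warning refers to is a one-liner; a sketch with a hypothetical set of grid metallicities:

```python
import numpy as np

# Hypothetical metallicities of the available grid files
metals = np.array([-0.5, -0.25, 0.0, 0.25, 0.5])
z_fit = 0.13  # best-fit (interpolated) metallicity from the grid search

# the full SED model is built at the closest grid point, not at z_fit
z_model = metals[np.argmin(np.abs(metals - z_fit))]
```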
2097
2098 - def sample_gridsearch(self,NrSamples=1,df=None,selfact='chisq'):
2099 """ 2100 Retrieve an element from the results of a grid search according to the derived probability. 2101 2102 An array of length "NrSamples" containing 'grid-indices' is returned, so the actual parameter values 2103 of the corresponding model can be retrieved from the results dictionary. 2104 2105 @param NrSamples: the number of samples you wish to draw 2106 @type NrSamples: int 2107 """ 2108 ranges = self.generate_ranges(teffrange=None,loggrange=None,ebvrange=None,\ 2109 zrange=None,rvrange=None,vradrange=None,start_from='igrid_search') 2110 2111 #-- If necessary, calculate degrees of freedom from the ranges 2112 if df == None and ranges != None: 2113 df,df_info = self.calculateDF(**ranges) 2114 elif df == None: 2115 logger.warning('Cannot compute degrees of freedom!!! CHI2 might not make sense. (using df=5)') 2116 df = 5 2117 2118 #-- Do the statistics 2119 N = sum(self.master['include']) 2120 k = N-df 2121 if k<=0: 2122 logger.warning('Not enough data to compute CHI2: it will not make sense') 2123 k = 1 2124 logger.info('Statistics based on df={0} and Nobs={1}'.format(df,N)) 2125 2126 #-- Compute the pdf and cdf 2127 probdensfunc = scipy.stats.distributions.chi2.pdf(self.results['igrid_search']['grid'][selfact],k) 2128 cumuldensfunc = probdensfunc.cumsum() 2129 2130 #-- Uniformly sample the cdf, to get a sampling according to the pdf 2131 sample = pl.uniform(cumuldensfunc[0],cumuldensfunc[-1],NrSamples) 2132 indices = np.zeros(NrSamples,int) 2133 for i in xrange(NrSamples): 2134 indices[i] = (abs(cumuldensfunc-sample[i])).argmin() 2135 return indices
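The sampling is a nearest-index inversion of the cumulative chi-square density. A self-contained sketch with hypothetical chi-square values and degrees of freedom (numpy's generator stands in for the C{pylab.uniform} call used in the method):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical chi2 values of the grid models (sorted from worst to best)
chisq = np.array([12.0, 8.0, 5.5, 5.1, 5.0])
k = 3  # degrees of freedom

# pdf of each model, its cumulative sum, and uniform sampling of the cdf
probdensfunc = stats.chi2.pdf(chisq, k)
cumuldensfunc = probdensfunc.cumsum()
sample = rng.uniform(cumuldensfunc[0], cumuldensfunc[-1], size=10)
indices = np.array([np.abs(cumuldensfunc - s).argmin() for s in sample])
```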
2136
2137 - def chi2(self,select=None,reduced=False,label='igrid_search'):
2138 """ 2139 Calculate chi2 of best model. 2140 2141 TEMPORARY!!! API WILL CHANGE!!! 2142 """ 2143 #-- select photometry 2144 master = self.master.copy() 2145 keep = np.ones(len(master),bool) 2146 if isinstance(select,dict): 2147 for key in select: 2148 keep = keep & (master[key]==select[key]) 2149 master = master[keep] 2150 #-- compute synthetic photometry 2151 photbands = master['photband'] 2152 is_color = master['color'] 2153 synflux = np.zeros(len(photbands)) 2154 2155 wave,flux,urflux = self.get_model(label=label) 2156 synflux[-is_color] = model.synthetic_flux(wave,flux,photbands[-is_color]) 2157 synflux[is_color] = model.synthetic_color(wave,flux,photbands[is_color]) 2158 x2 = (master['cmeas']-synflux)**2/master['e_cmeas']**2 2159 x2 = x2.mean() 2160 return x2
2161 2162 #} 2163 #{ Add constraints 2164
2165 - def add_constraint_distance(self,distance=None,mtype='igrid_search',**kwargs):
2166 """ 2167 Use the distance to compute additional information. 2168 2169 Compute radii, absolute luminosities and masses, and add them to the 2170 results. 2171 2172 Extra kwargs go to L{get_distance_from_plx}. 2173 2174 B{Warning:} after calling this function, the C{labs} column in the grid 2175 is actually absolutely calibrated and reflects the true absolute 2176 luminosity instead of the absolute luminosity assuming 1 solar radius. 2177 2178 @param distance: distance in solar units and error 2179 @type distance: tuple (float,float) 2180 @param mtype: type of results to add the information to 2181 @type mtype: str 2182 """ 2183 if distance is None: 2184 kwargs['lutz_kelker'] = False # we can't handle asymmetric error bars 2185 kwargs['unit'] = 'Rsol' 2186 distance = self.get_distance_from_plx(**kwargs) 2187 2188 #-- compute the radius, absolute luminosity and mass: note that there is 2189 # an uncertainty on the scaling factor *and* on the distance! 2190 scale = conversions.unumpy.uarray([self.results[mtype]['grid']['scale'],\ 2191 self.results[mtype]['grid']['escale']]) 2192 distance = conversions.ufloat(distance) 2193 radius = distance*np.sqrt(scale) # in Rsol 2194 labs = self.results[mtype]['grid']['labs']*radius**2 2195 mass = conversions.derive_mass((self.results[mtype]['grid']['logg'],'[cm/s2]'),\ 2196 (radius,'Rsol'),unit='Msol') 2197 2198 #-- store the results in the grid 2199 mass,e_mass = conversions.unumpy.nominal_values(mass),\ 2200 conversions.unumpy.std_devs(mass) 2201 radius,e_radius = conversions.unumpy.nominal_values(radius),\ 2202 conversions.unumpy.std_devs(radius) 2203 labs,e_labs = conversions.unumpy.nominal_values(labs),\ 2204 conversions.unumpy.std_devs(labs) 2205 self.results[mtype]['grid']['labs'] = labs # Labs is already there, overwrite 2206 #-- check if the others are already in there or not: 2207 labels,data = ['e_labs','radius','e_radius','mass','e_mass'],\ 2208 [e_labs,radius,e_radius,mass,e_mass] 
2209 if 'e_labs' in self.results[mtype]['grid'].dtype.names: 2210 for idata,ilabel in zip(data,labels): 2211 self.results[mtype]['grid'][ilabel] = idata 2212 else: 2213 self.results[mtype]['grid'] = pl.mlab.rec_append_fields(self.results[mtype]['grid'],\ 2214 labels,data) 2215 2216 #-- update the confidence intervals 2217 self.calculate_confidence_intervals(mtype=mtype) 2218 logger.info('Added constraint: distance (improved luminosity, added info on radius and mass)')
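The relations behind this constraint are R = d*sqrt(scale) (with scale the (R/d)**2 dilution factor and the distance expressed in solar radii), L = L_grid*R**2, and M = g*R**2/G. A plain-number sketch without the C{conversions}/C{uncertainties} machinery; all values are hypothetical:

```python
import numpy as np

distance = 2.1e9   # distance in Rsol (roughly 47 pc), hypothetical
scale = 4.0e-19    # best-fit dilution factor (R/d)**2
labs_1rsol = 1.7   # grid luminosity computed assuming R = 1 Rsol
logg = 4.3         # log surface gravity [cm/s2]

radius = distance * np.sqrt(scale)  # stellar radius in Rsol
labs = labs_1rsol * radius**2       # absolutely calibrated luminosity

# mass from g = G*M/R**2, converting the radius to cgs
G, Rsol, Msol = 6.674e-8, 6.957e10, 1.989e33
mass = 10**logg * (radius * Rsol)**2 / G / Msol  # in Msol
```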
2219
2220 - def add_constraint_slo(self,numax,Deltanu0,mtype='igrid_search',chi2_type='red'):
2221 """ 2222 Use diagnostics from solar-like oscillations to put additional constraints on the parameters. 2223 2224 If the results have been constrained with the distance beforehand (L{add_constraint_distance}), 2225 then these results are combined with the SLO constraints. 2226 2227 @param numax: frequency of maximum amplitude 2228 @type numax: 3-tuple (value,error,unit) 2229 @param Deltanu0: large separation (l=0) 2230 @type Deltanu0: 3-tuple (value,error,unit) 2231 @param chi2_type: type of chi2 (raw or reduced) 2232 @type chi2_type: str ('raw' or 'red') 2233 """ 2234 grid = self.results[mtype]['grid'] 2235 #-- we need the teffs, so that we can compute the logg and radius using 2236 # the solar-like oscillations information. 2237 teff = grid['teff'] 2238 logg = grid['logg'] 2239 pvalues = [1-grid['ci_'+chi2_type]] 2240 names = ['ci_'+chi2_type+'_phot'] 2241 2242 #-- we *always* have a logg, that's the way SEDs work: 2243 logg_slo = conversions.derive_logg_slo((teff,'K'),numax) 2244 logg_slo,e_logg_slo = conversions.unumpy.nominal_values(logg_slo),\ 2245 conversions.unumpy.std_devs(logg_slo) 2246 # compute the probabilities: scale to standard normal 2247 logg_prob = scipy.stats.distributions.norm.sf(abs(logg_slo-logg)/e_logg_slo) 2248 2249 pvalues.append(logg_prob) 2250 names.append('ci_logg_slo') 2251 2252 #-- but we only sometimes have a radius (viz. when the distance is 2253 # known). There is an uncertainty on that radius! 2254 radius_slo = conversions.derive_radius_slo(numax,Deltanu0,(teff,'K'),unit='Rsol') 2255 radius_slo,e_radius_slo = conversions.unumpy.nominal_values(radius_slo),\ 2256 conversions.unumpy.std_devs(radius_slo) 2257 if 'radius' in grid.dtype.names: 2258 radius = grid['radius'] 2259 e_radius = grid['e_radius'] 2260 #total_error = np.sqrt( (e_radius_slo**2+e_radius**2)) 2261 total_error = np.sqrt( (e_radius_slo**2+e_radius**2)/2. + (radius-radius_slo)**2/4.) 
radius_prob = scipy.stats.distributions.norm.sf(abs(radius_slo-radius)/total_error) 2263 pvalues.append(radius_prob) 2264 names.append('ci_radius_slo') 2265 #-- combined standard deviation and mean for two populations with 2266 # possibly zero intersection (wiki: standard deviation) 2267 #total_error = np.sqrt( (e_radius_slo**2+e_radius**2)/2. + (radius-radius_slo)**2/4.) 2268 total_mean = (radius+radius_slo)/2. 2269 grid['radius'] = total_mean 2270 grid['e_radius'] = total_error 2271 logger.info('Added constraint: combined existing radius estimate with slo estimate') 2272 # otherwise, we just add info on the radius 2273 else: 2274 labs = grid['labs']*conversions.unumpy.uarray([radius_slo,e_radius_slo])**2 2275 labs,e_labs = conversions.unumpy.nominal_values(labs),\ 2276 conversions.unumpy.std_devs(labs) 2277 grid['labs'] = labs 2278 mass = conversions.derive_mass((self.results[mtype]['grid']['logg'],'[cm/s2]'),\ 2279 (radius_slo,e_radius_slo,'Rsol'),unit='Msol') 2280 mass,e_mass = conversions.unumpy.nominal_values(mass),\ 2281 conversions.unumpy.std_devs(mass) 2282 #-- now we can also derive the distance: 2283 scale = conversions.unumpy.uarray([self.results[mtype]['grid']['scale'],\ 2284 self.results[mtype]['grid']['escale']]) 2285 distance = radius_slo/conversions.sqrt(scale) 2286 distance,e_distance = conversions.unumpy.nominal_values(distance),\ 2287 conversions.unumpy.std_devs(distance) 2288 grid = pl.mlab.rec_append_fields(grid,['radius','e_radius','e_labs','mass','e_mass','distance','e_distance'],\ 2289 [radius_slo,e_radius_slo,e_labs,mass,e_mass,distance,e_distance]) 2290 logger.info('Added constraint: {0:s} via slo, improved luminosity'.format(', '.join(['radius','e_radius','e_labs','mass','e_mass']))) 2291 2292 #-- combine p values using Fisher's method 2293 combined_pvals = evaluate.fishers_method(pvalues) 2294 2295 #-- add the information to the grid 2296 self.results[mtype]['grid'] = pl.mlab.rec_append_fields(grid,\ 2297 names,pvalues) 2298 2299 
#-- and replace the original confidence intervals, and re-order 2300 self.results[mtype]['grid']['ci_'+chi2_type] = 1-combined_pvals 2301 sa = np.argsort(self.results[mtype]['grid']['ci_'+chi2_type])[::-1] 2302 self.results[mtype]['grid'] = self.results[mtype]['grid'][sa] 2303 2304 #-- update the confidence intervals 2305 self.calculate_confidence_intervals(mtype=mtype) 2306 logger.info('Added constraint: {0:s} via slo and replaced ci_{1:s} with combined CI'.format(', '.join(names),chi2_type))
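C{evaluate.fishers_method} presumably combines the independent p-values as X = -2*sum(ln p), which follows a chi-square distribution with 2k degrees of freedom; a sketch under that assumption (the helper name and the arrays below are hypothetical):

```python
import numpy as np
from scipy import stats

def fishers_method(pvalues):
    """Combine k independent p-values: X = -2*sum(ln p) ~ chi2(2k)."""
    pvalues = np.asarray(pvalues, float)
    X = -2.0 * np.sum(np.log(pvalues), axis=0)
    return stats.chi2.sf(X, 2 * len(pvalues))

# hypothetical per-model p-values from photometry and from logg_slo
p_phot = np.array([0.8, 0.5, 0.01])
p_logg = np.array([0.7, 0.04, 0.02])
combined = fishers_method([p_phot, p_logg])
```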
2307
2308 - def add_constraint_reddening(self,distance=None,ebv=None,e_ebv=0.1,Rv=3.1,\ 2309 model=None,mtype='igrid_search',chi2_type='red',upper_limit=False):
2310 """ 2311 Use reddening maps to put additional constraints on the parameters. 2312 2313 This constraint assumes that, if C{upper_limit=False}, given the 2314 distance to target, the reddening from the reddening maps is similar to 2315 the one derived from the SED. All models where this is not the case will 2316 be deemed improbable. 2317 2318 If C{upper_limit=True}, all models with an E(B-V) value above C{ebv} 2319 will be considered improbable. 2320 2321 When you don't set C{ebv}, the value will, by default, be derived from 2322 Drimmel maps if C{upper_limit=False} and from Schlegel if 2323 C{upper_limit=True}. You can change this behaviour by setting C{model} 2324 manually. 2325 2326 @param distance: distance and uncertainty in parsec 2327 @type distance: tuple (float,float) 2328 @param ebv: E(B-V) reddening in magnitude 2329 @type ebv: float 2330 @param e_ebv: error on the reddening in percentage 2331 @type e_ebv: float 2332 @param model: model reddening maps 2333 @type model: str 2334 @param mtype: type of results to add the information to 2335 @type mtype: str 2336 @param upper_limit: consider the E(B-V) value as an upper limit 2337 @type upper_limit: bool 2338 """ 2339 #-- for upper limits on E(B-V), we best use Schlegel maps by default, 2340 # otherwise we best use Drimmel maps. 2341 if model is None and upper_limit: 2342 model = 'schlegel' 2343 elif model is None: 2344 model = 'drimmel' 2345 #-- if I need to figure out the reddening myself, I need the distance! 
2346 if ebv is None and distance is None: 2347 distance = self.get_distance_from_plx(lutz_kelker=False,unit='pc') 2348 #-- let me figure out the reddening if you didn't give it: 2349 if ebv is None: 2350 gal = self.info['galpos'] 2351 ebv = extinctionmodels.findext(gal[0], gal[1], model=model, distance=distance[0])/Rv 2352 ebv_u = extinctionmodels.findext(gal[0], gal[1], model=model, distance=distance[0]-distance[1])/Rv 2353 ebv_l = extinctionmodels.findext(gal[0], gal[1], model=model, distance=distance[0]+distance[1])/Rv 2354 e_ebv = max(ebv-ebv_l,ebv_u-ebv) 2355 else: 2356 e_ebv = 0.1*ebv 2357 grid = self.results[mtype]['grid'] 2358 ebvs = grid['ebv'] 2359 #-- probabilities are differently calculated depending on upper limit. 2360 if not upper_limit: 2361 ebv_prob = scipy.stats.distributions.norm.cdf(abs(ebv-ebvs)/e_ebv) 2362 #-- combine p values using Fisher's method 2363 combined_pvals = evaluate.fishers_method([1-grid['ci_'+chi2_type],ebv_prob]) 2364 grid['ci_'+chi2_type] = 1-combined_pvals 2365 else: 2366 grid['ci_'+chi2_type] = np.where(ebvs<=ebv,grid['ci_'+chi2_type],1.) 2367 2368 #-- and replace the original confidence intervals, and re-order 2369 sa = np.argsort(grid['ci_'+chi2_type])[::-1] 2370 self.results[mtype]['grid'] = grid[sa] 2371 2372 #-- update the confidence intervals 2373 self.calculate_confidence_intervals(mtype=mtype) 2374 logger.info('Added constraint: E(B-V)={0}+/-{1}'.format(ebv,e_ebv))
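In the C{upper_limit=True} branch the constraint is a hard mask: any model whose fitted E(B-V) exceeds the map value is pushed outside every confidence interval. A sketch with hypothetical grid values:

```python
import numpy as np

# hypothetical fitted E(B-V) per grid model and map-based upper limit
ebvs = np.array([0.01, 0.05, 0.12, 0.30])
ci_red = np.array([0.10, 0.40, 0.60, 0.90])
ebv_max = 0.10

# models redder than the map are assigned ci = 1 (improbable)
ci_red = np.where(ebvs <= ebv_max, ci_red, 1.0)
```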
2375
2376 - def add_constraint_angular_diameter(self,angdiam):
2377 raise NotImplementedError
2378
2379 - def add_constraint_mass(self,mass,mtype='igrid_search',chi2_type='red'):
2380 """ 2381 Add constraints on the mass. 2382 2383 C{mass} must be a tuple, if the second element is smaller than the first, 2384 it is assumed to be (mu,sigma) from a normal distribution. If the second 2385 element is larger than the first, it is assumed to be (lower, upper) 2386 from a uniform distribution. 2387 """ 2388 normal = mass[0]>mass[1] 2389 grid = self.results[mtype]['grid'] 2390 masses = grid['mass'] 2391 if normal: 2392 mass_prob = scipy.stats.distributions.norm.cdf(abs(mass[0]-masses)/mass[1]) 2393 #-- combine p values using Fisher's method 2394 combined_pvals = evaluate.fishers_method([1-grid['ci_'+chi2_type],mass_prob]) 2395 grid['ci_'+chi2_type] = 1-combined_pvals 2396 else: 2397 grid['ci_'+chi2_type] = np.where((masses<=mass[0]) | (mass[1]<=masses),1.,grid['ci_'+chi2_type]) 2398 2399 #-- and replace the original confidence intervals, and re-order 2400 sa = np.argsort(grid['ci_'+chi2_type])[::-1] 2401 self.results[mtype]['grid'] = grid[sa] 2402 2403 #-- update the confidence intervals 2404 self.calculate_confidence_intervals(mtype=mtype) 2405 logger.info('Added constraint: mass={0}+/-{1}'.format(*mass))
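The tuple convention can be isolated into a tiny helper: (mu, sigma) when the first element is larger, (lower, upper) bounds otherwise. A hypothetical sketch; note it returns the one-sided survival function as the tail p-value, whereas the class itself calls C{norm.cdf} on the absolute deviation:

```python
import numpy as np
from scipy import stats

masses = np.array([0.8, 1.0, 1.4, 2.5])  # hypothetical grid masses (Msol)

def mass_pvalue(mass, masses):
    """(mu, sigma) if mass[0] > mass[1], else (lower, upper) bounds."""
    if mass[0] > mass[1]:  # normal distribution
        return stats.norm.sf(np.abs(mass[0] - masses) / mass[1])
    # uniform: p = 1 inside the bounds, 0 outside
    return np.where((mass[0] <= masses) & (masses <= mass[1]), 1.0, 0.0)

p_norm = mass_pvalue((1.2, 0.2), masses)   # interpreted as mu=1.2, sigma=0.2
p_unif = mass_pvalue((0.9, 1.5), masses)   # interpreted as bounds [0.9, 1.5]
```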
2406 2407 # def add_constraint_evolution_models(self,models='siess2000',\ 2408 # ylabels=['age','labs','radius'],e_y=None, 2409 # function='linear',mtype='igrid_search',chi2_type='red'): 2410 # """ 2411 # Use stellar evolutionary models to put additional constraints on the parameters. 2412 # """ 2413 # grid = self.results[mtype]['grid'] 2414 # 2415 # #-- make sure y's and ylabels are iterable 2416 # if isinstance(ylabels,str): 2417 # ylabels = [ylabels] 2418 # #-- cycle over all yvalues and compute the pvalues 2419 # pvalues = [1-grid['ci_'+chi2_type]] 2420 # add_info = [] 2421 # #-- create the evolutionary grid and interpolate the stellar evolutinary 2422 # # grid on the SED-integrated grid points 2423 # output = np.array([evolutionmodels.get_itable(iteff,ilogg,iz) for iteff,ilogg,iz in zip(grid['teff'],grid['logg'],grid['z'])]) 2424 # output = np.rec.fromarrays(output.T,names=['age','labs','radius']) 2425 # for label in ylabels: 2426 # y_interpolated = output[label] 2427 # add_info.append(y_interpolated) 2428 # #-- only add the contraint when it is possible to do so. 
2429 # if not label in grid.dtype.names: 2430 # logger.info("Cannot put constraint on {} (not a parameter)".format(label)) 2431 # continue 2432 # y_computed = grid[label] 2433 # #-- error on the y-value: either it is computed, it is given or it is 2434 # # assumed it is 10% of the value 2435 # if e_y is None and ('e_'+label) in grid.dtype.names: 2436 # e_y = np.sqrt(grid['e_'+label]**2+(0.1*y_interpolated)**2) 2437 # elif e_y is None: 2438 # e_y = np.sqrt((0.1*y_computed)**2+(0.1*y_interpolated)**2) 2439 # y_prob = scipy.stats.distributions.norm.sf(abs(y_computed-y_interpolated)/e_y) 2440 # pl.figure() 2441 # pl.subplot(221) 2442 # pl.title(label) 2443 # pl.scatter(y_computed,y_interpolated,c=y_prob,edgecolors='none',cmap=pl.cm.spectral) 2444 # pl.plot([pl.xlim()[0],pl.xlim()[1]],[pl.xlim()[0],pl.xlim()[1]],'r-',lw=2) 2445 # pl.xlim(pl.xlim()) 2446 # pl.ylim(pl.xlim()) 2447 # pl.xlabel('Computed') 2448 # pl.ylabel('Interpolated') 2449 # pl.colorbar() 2450 # pl.subplot(223) 2451 # pl.title('y_interpolated') 2452 # pl.scatter(grid['teff'],grid['logg'],c=y_interpolated,edgecolors='none',cmap=pl.cm.spectral) 2453 # pl.colorbar() 2454 # pl.subplot(224) 2455 # pl.title('y_computed') 2456 # pl.scatter(grid['teff'],grid['logg'],c=y_computed,edgecolors='none',cmap=pl.cm.spectral) 2457 # pl.colorbar() 2458 # pvalues.append(y_prob) 2459 # 2460 # pl.figure() 2461 # pl.subplot(221) 2462 # pl.title('p1') 2463 # sa = np.argsort(pvalues[0]) 2464 # pl.scatter(grid['labs'][sa],grid['radius'][sa],c=pvalues[0][sa],edgecolors='none',cmap=pl.cm.spectral) 2465 # pl.colorbar() 2466 # pl.xlabel('labs') 2467 # pl.ylabel('radius') 2468 # pl.subplot(222) 2469 # pl.title('p2') 2470 # sa = np.argsort(pvalues[1]) 2471 # pl.scatter(grid['labs'][sa],grid['radius'][sa],c=pvalues[1][sa],edgecolors='none',cmap=pl.cm.spectral) 2472 # pl.colorbar() 2473 # pl.xlabel('labs') 2474 # pl.ylabel('radius') 2475 # pl.subplot(223) 2476 # pl.title('p3') 2477 # sa = np.argsort(pvalues[2]) 2478 # 
pl.scatter(grid['labs'][sa],grid['radius'][sa],c=pvalues[2][sa],edgecolors='none',cmap=pl.cm.spectral) 2479 # pl.colorbar() 2480 # pl.xlabel('labs') 2481 # pl.ylabel('radius') 2482 # 2483 # #-- combine p values using Fisher's method 2484 # combined_pvals = evaluate.fishers_method(pvalues) 2485 # pl.subplot(224) 2486 # pl.title('pcombined') 2487 # sa = np.argsort(combined_pvals) 2488 # pl.scatter(grid['labs'][sa],grid['radius'][sa],c=combined_pvals[sa],edgecolors='none',cmap=pl.cm.spectral) 2489 # pl.colorbar() 2490 # pl.xlabel('labs') 2491 # pl.ylabel('radius') 2492 # pl.show() 2493 # 2494 # #-- add the information to the grid (assume that if there one label 2495 # # not in there already, none of them are 2496 # if not 'c_'+ylabels[0] in grid.dtype.names: 2497 # self.results[mtype]['grid'] = pl.mlab.rec_append_fields(grid,\ 2498 # ['c_'+ylabel for ylabel in ylabels],add_info) 2499 # #-- if they are already in there, overwrite! 2500 # else: 2501 # for info,ylabel in zip(add_info,ylabels): 2502 # self.results[mtype]['grid']['c_'+ylabel] = add_info 2503 # 2504 # #-- and replace the original confidence intervals, and re-order 2505 # self.results[mtype]['grid']['ci_'+chi2_type] = 1-combined_pvals 2506 # 2507 # sa = np.argsort(self.results[mtype]['grid']['ci_'+chi2_type])[::-1] 2508 # self.results[mtype]['grid'] = self.results[mtype]['grid'][sa] 2509 # 2510 # #-- update the confidence intervals 2511 # self.calculate_confidence_intervals(mtype=mtype) 2512 # logger.info('Added constraint: {0:s} via stellar models and replaced ci_{1:s} with combined CI'.format(', '.join(ylabels),chi2_type)) 2513 # 2514 # 2515 #} 2516 2517 #{ Plotting routines 2518
2519 - def _label_dict(self, param):
2520 """ 2521 Returns the label belonging to a certain parameter 2522 2523 If the label is not present, the function will just return param. 2524 """ 2525 #split parameter in param name and component number 2526 param, component = re.findall('(.*?)(\d?$)', param)[0] 2527 2528 ldict = dict(teff='Effective temperature [K]',\ 2529 z='log (Metallicity Z [$Z_\odot$]) [dex]',\ 2530 logg=r'log (surface gravity [cm s$^{-2}$]) [dex]',\ 2531 ebv='E(B-V) [mag]',\ 2532 ci_raw='Raw probability [%]',\ 2533 ci_red='Reduced probability [%]',\ 2534 #labs=r'log (Absolute Luminosity [$L_\odot$]) [dex]',\ 2535 labs=r'Absolute Luminosity [$L_\odot$]',\ 2536 radius=r'Radius [$R_\odot$]',\ 2537 mass=r'Mass [$M_\odot$]', 2538 mc=r'MC [Nr. points in hexagonal bin]', 2539 rv=r'Extinction parameter $R_v$') 2540 if param in ldict: 2541 param = ldict[param] 2542 if component != '': 2543 return param + " - " + component 2544 else: 2545 return param
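The regular expression in C{_label_dict} peels at most one trailing digit off the parameter name (the component index used in multi-component fits). A standalone check of that behaviour:

```python
import re

def split_param(param):
    """Split e.g. 'teff2' into ('teff', '2'); a name without a
    trailing digit yields an empty component string."""
    return re.findall(r'(.*?)(\d?$)', param)[0]
```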
2546 2547 @standalone_figure
2548 - def plot_grid(self,x='teff',y='logg',ptype='ci_red',mtype='igrid_search',limit=0.95,d=None,**kwargs):
2549 """ 2550 Plot grid as scatter plot 2551 2552 Parameter C{ptype} sets the colors of the scattered points (e.g., 'ci_red','z','ebv'). 2553 2554 Example usage: 2555 2556 First set the SED: 2557 2558 >>> mysed = SED('HD180642') 2559 >>> mysed.load_fits() 2560 2561 Then make the plots: 2562 2563 >>> p = pl.figure() 2564 >>> p = pl.subplot(221);mysed.plot_grid(limit=None) 2565 >>> p = pl.subplot(222);mysed.plot_grid(x='ebv',y='z',limit=None) 2566 >>> p = pl.subplot(223);mysed.plot_grid(x='teff',y='ebv',limit=None) 2567 >>> p = pl.subplot(224);mysed.plot_grid(x='logg',y='z',limit=None) 2568 2569 ]]include figure]]ivs_sed_builder_plot_grid_01.png] 2570 2571 """ 2572 #-- if no distance is set, derive the most likely distance from the plx: 2573 #if d is None: 2574 #try: 2575 #d = self.get_distance_from_plx() 2576 #except ValueError: 2577 #d = None 2578 #logger.info('Distance to {0} unknown'.format(self.ID)) 2579 #if isinstance(d,tuple): 2580 #d = d[0][np.argmax(d[1])] 2581 ##-- it's possible that we still don't have a distance 2582 #if d is not None: 2583 #logger.info('Assumed distance to {0} = {1:.3e} pc'.format(self.ID,d)) 2584 #radius = d*np.sqrt(self.results[mtype]['grid']['scale']) 2585 #radius = conversions.convert('pc','Rsol',radius) # in Rsol 2586 #labs = np.log10(self.results[mtype]['grid']['labs']*radius**2) # in [Lsol] 2587 #mass = conversions.derive_mass((self.results[mtype]['grid']['logg'].copy(),'[cm/s2]'),\ 2588 #(radius,'Rsol'),unit='Msol') 2589 #-- compute angular diameter 2590 theta = 2*conversions.convert('sr','mas',self.results[mtype]['grid']['scale']) 2591 2592 if limit is not None: 2593 region = self.results[mtype]['grid']['ci_red']<=limit 2594 else: 2595 region = self.results[mtype]['grid']['ci_red']<np.inf 2596 #-- get the colors and the color scale 2597 if d is not None and ptype=='labs': 2598 colors = locals()[ptype][region] 2599 elif ptype in self.results[mtype]['grid'].dtype.names: 2600 colors = self.results[mtype]['grid'][ptype][region] 2601 
elif isinstance(ptype,str): 2602 colors = locals()[ptype][region] 2603 else: 2604 colors = ptype[region] 2605 ptype = 'custom variable' 2606 2607 if 'ci_' in ptype.lower(): 2608 colors *= 100. 2609 vmin = colors.min() 2610 vmax = 95. 2611 else: 2612 vmin = kwargs.pop('vmin',colors.min()) 2613 vmax = kwargs.pop('vmax',colors.max()) 2614 2615 #-- define abbrevation for plotting 2616 if x in self.results[mtype]['grid'].dtype.names: 2617 X = self.results[mtype]['grid'][x] 2618 else: 2619 X = locals()[x] 2620 2621 if y in self.results[mtype]['grid'].dtype.names: 2622 Y = self.results[mtype]['grid'][y] 2623 else: 2624 Y = locals()[y] 2625 2626 #-- make the plot 2627 if mtype == 'imc': 2628 pl.hexbin(X,Y,mincnt=1,cmap=pl.cm.spectral) #bins='log' 2629 ptype = 'mc' 2630 2631 #-- set the limits 2632 pl.xlim(X.max(),X.min()) 2633 pl.ylim(Y.max(),Y.min()) 2634 cbar = pl.colorbar() 2635 else: 2636 #if limit is not None: 2637 #region = self.results[mtype]['grid']['ci_red']<limit 2638 #else: 2639 #region = self.results[mtype]['grid']['ci_red']<np.inf 2640 ##-- get the colors and the color scale 2641 #if d is not None and ptype=='labs': 2642 #colors = locals()[ptype][region] 2643 #elif ptype in self.results[mtype]['grid'].dtype.names: 2644 #colors = self.results[mtype]['grid'][ptype][region] 2645 #else: 2646 #colors = locals()[ptype][region] 2647 2648 #if 'ci' in ptype: 2649 #colors *= 100. 2650 #vmin = colors.min() 2651 #vmax = 95. 
2652 #else: 2653 #vmin = kwargs.pop('vmin',colors.min()) 2654 #vmax = kwargs.pop('vmax',colors.max()) 2655 2656 #-- grid scatter plot 2657 pl.scatter(X[region],Y[region], 2658 c=colors,edgecolors='none',cmap=pl.cm.spectral,vmin=vmin,vmax=vmax) 2659 #-- set the limits to only include the 95 interval 2660 pl.xlim(X[region].max(),X[region].min()) 2661 pl.ylim(Y[region].max(),Y[region].min()) 2662 cbar = pl.colorbar() 2663 2664 #-- mark best value 2665 pl.plot(X[-1],Y[-1],'r+',ms=40,mew=3) 2666 2667 pl.xlabel(self._label_dict(x)) 2668 pl.ylabel(self._label_dict(y)) 2669 cbar.set_label(self._label_dict(ptype)) 2670 2671 logger.info('Plotted %s-%s diagram of %s'%(x,y,ptype))
2672 2673 @standalone_figure
2674 - def plot_CI2D(self,xpar='teff',ypar='logg',mtype='iminimize', ptype='ci_red',**kwargs):
2675 """ 2676 Plot a 2D confidence interval calculated using the CI2D computation 2677 from calculate_iminimize_CI2D. Make sure you first calculate the grid 2678 you want using the calculate_iminimize_CI2D function. 2679 """ 2680 2681 grid = self.results[mtype]['CI2D'][xpar+"-"+ypar][ptype] 2682 x = self.results[mtype]['CI2D'][xpar+"-"+ypar]['x'] 2683 y = self.results[mtype]['CI2D'][xpar+"-"+ypar]['y'] 2684 2685 if ptype == 'ci_red' or ptype == 'ci_raw': 2686 grid = grid*100.0 2687 levels = np.linspace(0,100,25) 2688 ticks = [0,20,40,60,80,100] 2689 elif ptype == 'ci_chi2': 2690 grid = np.log10(grid) 2691 levels = np.linspace(np.min(grid), np.max(grid), 25) 2692 ticks = np.round(np.linspace(np.min(grid), np.max(grid), 11), 2) 2693 2694 pl.contourf(x,y,grid,levels,**kwargs) 2695 pl.xlabel(self._label_dict(xpar)) 2696 pl.ylabel(self._label_dict(ypar)) 2697 cbar = pl.colorbar(fraction=0.08,ticks=ticks) 2698 cbar.set_label(ptype!='ci_chi2' and r'Probability' or r'$^{10}$log($\chi^2$)')
2699 2700 2701 2702 2703 2704 @standalone_figure
2705 - def plot_data(self,colors=False, plot_unselected=True, 2706 unit_wavelength='angstrom',unit_flux=None,**kwargs):
2707 """ 2708 Plot only the SED data. 2709 2710 Extra kwargs are passed to plotting functions. 2711 2712 The decorator provides a keyword C{savefig}. When set to C{True}, an 2713 image name is generated automatically (very long!), and the figure is 2714 closed. When set to a string, the image is saved with that name, and 2715 the figure is closed. When C{savefig} is not given, the image will 2716 stay open so that the user can still access all plot elements and do 2717 further enhancements. 2718 2719 @param colors: if False, plot absolute values, otherwise plot colors 2720 (flux ratios) 2721 @type colors: boolean 2722 @param plot_unselected: if True, all photometry is plotted, otherwise 2723 only those that are selected 2724 """ 2725 if not plot_unselected: 2726 master = self.master[self.master['include']] 2727 else: 2728 master = self.master 2729 if unit_flux is None: 2730 unit_flux = master['cunit'][0] 2731 wave,flux,e_flux = master['cwave'],master['cmeas'],master['e_cmeas'] 2732 sources = master['source'] 2733 iscolor = np.array(master['color'],bool) 2734 photbands = master['photband'] 2735 indices = np.arange(len(master)) 2736 2737 allsystems = np.array([i.split('.')[0] for i in photbands]) 2738 systems = sorted(set(allsystems)) 2739 color_cycle = [pl.cm.spectral(j) for j in np.linspace(0, 1.0, len(systems))] 2740 2741 if not colors: 2742 color_cycle = itertools.cycle(color_cycle) 2743 pl.gca().set_xscale('log',nonposx='clip') 2744 pl.gca().set_yscale('log',nonposy='clip') 2745 wave = conversions.convert('angstrom',unit_wavelength,wave) 2746 flux,e_flux = conversions.convert(master['cunit'][0],unit_flux,flux,e_flux,wave=(wave,unit_wavelength)) 2747 mf = [] 2748 # plot photometric systems in different colors 2749 for system in systems: 2750 keep = (allsystems==system) & -iscolor 2751 if keep.sum(): 2752 # plot each photometric point separately, so that we can 2753 # use them interactively. Label them all with a unique ID 2754 # and make them pickable. 
2755 color = color_cycle.next() 2756 #for i in range(sum(keep)): 2757 #label = system if i==0 else '_nolegend_' 2758 #pltlin,caplins,barlincs = pl.errorbar(wave[keep][i],flux[keep][i],yerr=e_flux[keep][i],fmt='o',label=label,ms=7,picker=5,color=color,**kwargs) 2759 #pltlin.sed_index = indices[keep][i] 2760 #caplins[0].sed_index = indices[keep][i] 2761 #caplins[1].sed_index = indices[keep][i] 2762 #barlincs[0].sed_index = indices[keep][i] 2763 pl.errorbar(wave[keep],flux[keep],yerr=e_flux[keep],fmt='o',label=system,ms=7,color=color,**kwargs) 2764 mf.append(flux[keep]) 2765 if keep.sum(): 2766 pl.ylabel(conversions.unit2texlabel(unit_flux,full=True)) 2767 pl.xlabel('Wavelength [{0}]'.format(conversions.unit2texlabel(unit_wavelength))) 2768 #-- scale y-axis (sometimes necessary for data with huge errorbars) 2769 mf = np.log10(np.hstack(mf)) 2770 lmin,lmax = np.nanmin(mf),np.nanmax(mf) 2771 lrange = np.abs(lmin-lmax) 2772 pl.ylim(10**(lmin-0.1*lrange),10**(lmax+0.1*lrange)) 2773 else: 2774 pl.gca().set_color_cycle(color_cycle) 2775 names = [] 2776 start_index = 1 2777 for system in systems: 2778 keep = (allsystems==system) & iscolor 2779 if keep.sum(): 2780 pl.errorbar(range(start_index,start_index+keep.sum()),flux[keep],yerr=e_flux[keep],fmt='o',label=system,ms=7,**kwargs) 2781 names += [ph.split('.')[1] for ph in photbands[keep]] 2782 start_index += keep.sum() 2783 pl.xticks(range(1,len(names)+1),names,rotation=90) 2784 pl.ylabel(r'Flux ratio') 2785 pl.xlabel('Index') 2786 leg = pl.legend(prop=dict(size='small'),loc='best',fancybox=True) 2787 leg.get_frame().set_alpha(0.5) 2788 pl.grid() 2789 pl.title(self.ID)
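The flux-axis conversion handled by C{conversions.convert} reduces, for a change from erg/s/cm2/AA to a frequency-based unit such as Jy, to F_nu = F_lambda*lambda**2/c. A hypothetical numpy sketch of that relation (the values are made up for illustration):

```python
import numpy as np

c = 2.99792458e18                    # speed of light in angstrom/s
wave = np.array([5500.0, 2.2e4])     # wavelengths in angstrom
flam = np.array([3.6e-9, 1.1e-10])   # hypothetical F_lambda [erg/s/cm2/AA]

fnu = flam * wave**2 / c             # F_nu [erg/s/cm2/Hz]
fjy = fnu / 1e-23                    # 1 Jy = 1e-23 erg/s/cm2/Hz
```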
2790 2791 2792 @standalone_figure
2793 - def plot_sed(self,colors=False,mtype='igrid_search',plot_deredded=False, 2794 plot_unselected=True,wave_units='AA',flux_units='erg/s/cm2',**kwargs):
2795 """ 2796 Plot a fitted SED together with the data. 2797 2798 Example usage: 2799 2800 First set the SED: 2801 2802 >>> mysed = SED('HD180642') 2803 >>> mysed.load_fits() 2804 2805 Then make the plots: 2806 2807 >>> p = pl.figure() 2808 >>> p = pl.subplot(121) 2809 >>> mysed.plot_sed(colors=False) 2810 >>> p = pl.subplot(122) 2811 >>> mysed.plot_sed(colors=True) 2812 2813 ]]include figure]]ivs_sed_builder_plot_sed_01.png] 2814 """ 2815 2816 def plot_sed_getcolors(master,color_dict=None): 2817 myphotbands = [iphotb.split('.')[1] for iphotb in master['photband'][master['color']]] 2818 if not myphotbands: #-- If there are no colours none can be returned (added by joris 30-01-2012) 2819 return [],[],[],None 2820 if color_dict is None: 2821 color_dict = {myphotbands[0]:0} 2822 for mycol in myphotbands[1:]: 2823 if not mycol in color_dict: 2824 max_int = max([color_dict[entry] for entry in color_dict]) 2825 color_dict[mycol] = max_int+1 2826 x = [color_dict[mycol] for mycol in myphotbands] 2827 y = master['cmeas'] 2828 e_y = master['e_cmeas'] 2829 return x,y,e_y,color_dict
2830 2831 2832 x__,y__,e_y__,color_dict = plot_sed_getcolors(self.master) 2833 2834 #-- get the color cycle 2835 systems = np.array([system.split('.')[0] for system in self.master['photband']],str) 2836 set_systems = sorted(list(set(systems))) 2837 color_cycle = itertools.cycle([pl.cm.spectral(j) for j in np.linspace(0, 1.0, len(set_systems))]) 2838 2839 #-- for plotting reasons, we translate every color to an integer 2840 for system in set_systems: 2841 color = color_cycle.next() 2842 keep = (systems==system) & (self.master['color']==colors) 2843 if not plot_unselected: 2844 keep = keep & self.master['include'] 2845 #-- synthetic: 2846 if sum(keep) and mtype in self.results and 'synflux' in self.results[mtype]: 2847 if colors: 2848 x,y,e_y,color_dict = plot_sed_getcolors(self.master[keep],color_dict) 2849 y = self.results[mtype]['synflux'][1][keep] 2850 else: 2851 x = self.results[mtype]['synflux'][0][keep] 2852 y = self.results[mtype]['synflux'][1][keep] 2853 #-- convert to correct units 2854 y = conversions.convert('erg/s/cm2/AA',flux_units,y,wave=(x,'AA')) 2855 x = conversions.convert('AA',wave_units,x) 2856 pl.plot(x,y,'x',ms=10,mew=2,alpha=0.75,color=color,**kwargs) 2857 #-- include: 2858 keep = (systems==system) & (self.master['color']==colors) & self.master['include'] 2859 if sum(keep): 2860 if colors: 2861 #-- translate every color to an integer 2862 x,y,e_y,color_dict = plot_sed_getcolors(self.master[keep],color_dict) 2863 else: 2864 if mtype in self.results and 'synflux' in self.results[mtype]: 2865 x = self.results[mtype]['synflux'][0][keep] 2866 else: 2867 x = self.master['cwave'][keep] 2868 y = self.master['cmeas'][keep] 2869 e_y = self.master['e_cmeas'][keep] 2870 y,e_y = conversions.convert('erg/s/cm2/AA',flux_units,y,e_y,wave=(x,'AA')) 2871 x = conversions.convert('AA',wave_units,x) 2872 2873 2874 p = pl.errorbar(x,y,yerr=e_y,fmt='o',label=system, 2875 capsize=10,ms=7,color=color,**kwargs) 2876 2877 #-- exclude: 2878 label = np.any(keep) and 
'_nolegend_' or system 2879 keep = (systems==system) & (self.master['color']==colors) & -self.master['include'] 2880 if sum(keep) and plot_unselected: 2881 if colors: 2882 x,y,e_y,color_dict = plot_sed_getcolors(self.master[keep],color_dict) 2883 else: 2884 x = self.results[mtype]['synflux'][0][keep] 2885 if np.any(np.isnan(x)): 2886 x = self.master['cwave'][keep] 2887 y = self.master['cmeas'][keep] 2888 e_y = self.master['e_cmeas'][keep] 2889 y,e_y = conversions.convert('erg/s/cm2/AA',flux_units,y,e_y,wave=(x,'AA')) 2890 x = conversions.convert('AA',wave_units,x) 2891 pl.errorbar(x,y,yerr=e_y,fmt='o',label=label, 2892 capsize=10,ms=7,mew=2,color=color,mfc='1',mec=color,**kwargs) 2893 2894 #-- only set logarithmic scale if absolute fluxes are plotted 2895 # and only plot the real model then 2896 if not colors: 2897 pl.gca().set_xscale('log',nonposx='clip') 2898 pl.gca().set_yscale('log',nonposy='clip') 2899 pl.gca().set_autoscale_on(False) 2900 2901 #-- the model 2902 if mtype in self.results and 'model' in self.results[mtype]: 2903 wave,flux,flux_ur = self.results[mtype]['model'] 2904 2905 flux = conversions.convert('erg/s/cm2/AA',flux_units,flux,wave=(wave,'AA')) 2906 flux_ur = conversions.convert('erg/s/cm2/AA',flux_units,flux_ur,wave=(wave,'AA')) 2907 wave = conversions.convert('AA',wave_units,wave) 2908 2909 pl.plot(wave,flux,'r-',**kwargs) 2910 if plot_deredded: 2911 pl.plot(wave,flux_ur,'k-',**kwargs) 2912 pl.ylabel(conversions.unit2texlabel(flux_units,full=True)) 2913 pl.xlabel('wavelength [{0}]'.format(conversions.unit2texlabel(wave_units))) 2914 else: 2915 xlabels = color_dict.keys() 2916 xticks = [color_dict[key] for key in xlabels] 2917 pl.xticks(xticks,xlabels,rotation=90) 2918 pl.ylabel(r'Flux ratio') 2919 pl.xlabel('Color name') 2920 pl.xlim(min(xticks)-0.5,max(xticks)+0.5) 2921 2922 pl.grid() 2923 if colors: 2924 leg = pl.legend(loc='best',prop=dict(size='x-small')) 2925 else: 2926 leg = pl.legend(loc='upper right',prop=dict(size='x-small')) 2927 
leg.get_frame().set_alpha(0.5) 2928 loc = (0.45,0.05) 2929 if mtype in self.results and 'grid' in self.results[mtype]: 2930 teff = self.results[mtype]['grid']['teff'][-1] 2931 logg = self.results[mtype]['grid']['logg'][-1] 2932 ebv = self.results[mtype]['grid']['ebv'][-1] 2933 scale = self.results[mtype]['grid']['scale'][-1] 2934 angdiam = 2*conversions.convert('sr','mas',scale) 2935 try: 2936 teff2 = self.results[mtype]['CI']['teff2'] 2937 logg2 = self.results[mtype]['CI']['logg2'] 2938 radii = self.results[mtype]['CI']['rad2']/self.results[mtype]['CI']['rad'] 2939 pl.annotate('Teff=%i %i K\nlogg=%.2f %.2f cgs\nE(B-V)=%.3f mag\nr2/r1=%.2f\n$\Theta$=%.3g mas'%(teff,teff2,logg,logg2,ebv,radii,angdiam), 2940 loc,xycoords='axes fraction') 2941 except: 2942 pl.annotate('Teff=%d K\nlogg=%.2f cgs\nE(B-V)=%.3f mag\n$\Theta$=%.3g mas'%(teff,logg,ebv,angdiam), 2943 loc,xycoords='axes fraction') 2944 logger.info('Plotted SED as %s'%(colors and 'colors' or 'absolute fluxes'))
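
The colour bookkeeping in C{plot_sed_getcolors} above reduces to assigning every colour name (the part after the dot in the photband, e.g. C{JOHNSON.B-V}) a stable integer position on the x axis, so observed and synthetic points line up across calls. A minimal standalone sketch of that mapping (written in Python 3; the function name is illustrative, not part of the module):

```python
def colors_to_indices(photbands, color_dict=None):
    """Map colour names like 'JOHNSON.B-V' to stable integer x positions."""
    names = [band.split('.')[1] for band in photbands]
    if color_dict is None:
        color_dict = {}
    for name in names:
        if name not in color_dict:
            # assign the next free integer (one past the current maximum)
            color_dict[name] = max(color_dict.values(), default=-1) + 1
    return [color_dict[name] for name in names], color_dict

x, mapping = colors_to_indices(['JOHNSON.B-V', 'JOHNSON.U-B', 'JOHNSON.B-V'])
# repeated colours reuse their integer; passing the same color_dict to a
# second call keeps the x positions consistent between datasets
```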
2945 2946 @standalone_figure
2947 - def plot_chi2(self,colors=False,mtype='igrid_search'):
2948 """ 2949 Plot chi2 statistic for every datapoint included in the fit. 2950 2951 To plot the statistic from the included absolute values, set 2952 C{colors=False}. 2953 2954 To plot the statistic from the included colors, set C{colors=True}. 2955 2956 Example usage: 2957 2958 First set the SED: 2959 2960 >>> mysed = SED('HD180642') 2961 >>> mysed.load_fits() 2962 2963 Then make the plots: 2964 2965 >>> p = pl.figure() 2966 >>> p = pl.subplot(121) 2967 >>> mysed.plot_chi2(colors=False) 2968 >>> p = pl.subplot(122) 2969 >>> mysed.plot_chi2(colors=True) 2970 2971 ]]include figure]]ivs_sed_builder_plot_chi2_01.png] 2972 2973 @param colors: flag to distinguish between colors and absolute values 2974 @type colors: boolean 2975 """ 2976 2977 include_grid = self.master['include'] 2978 systems = np.array([system.split('.')[0] for system in self.master['photband'][include_grid]],str) 2979 set_systems = sorted(list(set(systems))) 2980 color_cycle = itertools.cycle([pl.cm.spectral(i) for i in np.linspace(0, 1.0, len(set_systems))]) 2981 if mtype in self.results: 2982 eff_waves,synflux,photbands = self.results[mtype]['synflux'] 2983 chi2 = self.results[mtype]['chi2'] 2984 for system in set_systems: 2985 color = color_cycle.next() 2986 keep = systems==system 2987 if sum(keep) and not colors: 2988 try: 2989 pl.loglog(eff_waves[include_grid][keep],chi2[include_grid][keep],'o',label=system,color=color) 2990 except: 2991 logger.critical('Plotting of CHI2 of absolute values failed') 2992 elif sum(keep) and colors: 2993 pl.semilogy(range(len(eff_waves[include_grid][keep])),chi2[include_grid][keep],'o',label=system,color=color) 2994 pl.legend(loc='upper right',prop=dict(size='x-small')) 2995 pl.grid() 2996 pl.annotate('Total $\chi^2$ = %.1f'%(self.results[mtype]['grid']['chisq'][-1]),(0.59,0.120),xycoords='axes fraction',color='r') 2997 #pl.annotate('Total Reduced $\chi^2$ = %0.2f'%(sum(chi2[include_grid][keep])),(0.59,0.075),xycoords='axes fraction',color='r') 2998 if 'factor' in 
self.results[mtype]: 2999 pl.annotate('Error scale = %.2f'%(np.sqrt(self.results[mtype]['factor'])),(0.59,0.030),xycoords='axes fraction',color='k') 3000 xlims = pl.xlim() 3001 pl.plot(xlims,[self.results[mtype]['grid']['chisq'][-1],self.results[mtype]['grid']['chisq'][-1]],'r-',lw=2) 3002 pl.xlim(xlims) 3003 pl.xlabel('wavelength [$\AA$]') 3004 pl.ylabel(r'Reduced ($\chi^2$)') 3005 logger.info('Plotted CHI2 of %s'%(colors and 'colors' or 'absolute fluxes')) 3006 else: 3007 logger.info('%s not in results, no plot could be made.'%(mtype))
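
The per-datapoint statistic plotted above is taken from the stored fit results; in essence it is the squared residual scaled by the measurement error. A hedged sketch of that quantity (the actual values come from the C{fit} module, which may apply additional weighting or scaling):

```python
import numpy as np

def chi2_per_point(obs, err, syn):
    """Per-datapoint chi-squared contribution: ((obs - syn) / err)**2."""
    obs, err, syn = (np.asarray(a, dtype=float) for a in (obs, err, syn))
    return ((obs - syn) / err) ** 2

chi2 = chi2_per_point([1.0, 2.0], [0.1, 0.5], [1.1, 1.5])
# both points lie exactly one sigma from the model, so each contributes ~1.0
```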
3008 3009 3010 @standalone_figure
3011 - def plot_distance(self,mtype='igrid_search'):
3012 try: 3013 #-- necessary information 3014 (d_models,d_prob_models,radii) = self.results['igrid_search']['d_mod'] 3015 (d,dprob) = self.results['igrid_search']['d'] 3016 3017 ax_d = pl.gca() 3018 3019 gal = self.info['galpos'] 3020 #-- the plot 3021 dzoom = dprob>1e-4 3022 pl.plot(d,dprob,'k-') 3023 pl.grid() 3024 pl.xlabel('Distance [pc]') 3025 pl.ylabel('Probability [unnormalized]') 3026 pl.xlim(d[dzoom].min(),d[dzoom].max()) 3027 xlims = pl.xlim() 3028 pl.twiny(ax_d) 3029 pl.xlim(xlims) 3030 xticks = pl.xticks() 3031 pl.xticks(xticks[0],['%.2f'%(conversions.convert('pc','Rsol',np.sqrt(self.results['igrid_search']['grid']['scale'][-1])*di)) for di in xticks[0]]) 3032 pl.xlabel('Radius [$R_\odot$]') 3033 pl.twinx(ax_d) 3034 res = 100 3035 d_keep = (xlims[0]<=d[::res]) & (d[::res]<=xlims[1]) 3036 if len(self.results['igrid_search']['drimmel']): 3037 pl.plot(d[::res][d_keep],self.results['igrid_search']['drimmel'].ravel()[d_keep],'b-',label='Drimmel') 3038 if len(self.results['igrid_search']['marshall']): 3039 pl.plot(d[::res][d_keep],self.results['igrid_search']['marshall'].ravel()[d_keep],'b--',label='Marshall') 3040 ebv = self.results[mtype]['grid']['ebv'][-1] 3041 pl.plot(xlims,[ebv*3.1,ebv*3.1],'r--',lw=2,label='measured') 3042 pl.ylabel('Visual extinction $A_v$ [mag]') 3043 pl.legend(loc='lower right',prop=dict(size='x-small')) 3044 pl.xlim(xlims) 3045 logger.info('Plotted distance/reddening') 3046 except KeyError: 3047 logger.info('No distance/reddening plotted due to KeyError.')
3048 3049 @standalone_figure
3050 - def plot_grid_model(self,ptype='prob'):
3051 """ 3052 Grid of models 3053 """ 3054 if 'spType' in self.info: 3055 pl.title(self.info['spType']) 3056 cutlogg = (self.results['igrid_search']['grid']['logg']<=4.4) & (self.results['igrid_search']['grid']['ci_red']<=0.95) 3057 (d_models,d_prob_models,radii) = self.results['igrid_search']['d_mod'] 3058 (d,dprob) = self.results['igrid_search']['d'] 3059 gal = self.info['galpos'] 3060 3061 n = 75000 3062 region = self.results['igrid_search']['grid']['ci_red']<0.95 3063 total_prob = 100-(1-self.results['igrid_search']['grid']['ci_red'][cutlogg][-n:])*d_prob_models*100 3064 tp_sa = np.argsort(total_prob)[::-1] 3065 if ptype=='prob': 3066 pl.scatter(self.results['igrid_search']['grid']['teff'][cutlogg][-n:][tp_sa],self.results['igrid_search']['grid']['logg'][cutlogg][-n:][tp_sa], 3067 c=total_prob[tp_sa],edgecolors='none',cmap=pl.cm.spectral, 3068 vmin=total_prob.min(),vmax=total_prob.max()) 3069 elif ptype=='radii': 3070 pl.scatter(self.results['igrid_search']['grid']['teff'][cutlogg][-n:][tp_sa],self.results['igrid_search']['grid']['logg'][cutlogg][-n:][tp_sa], 3071 c=radii,edgecolors='none',cmap=pl.cm.spectral, 3072 vmin=radii.min(),vmax=radii.max()) 3073 pl.xlim(self.results['igrid_search']['grid']['teff'][region].max(),self.results['igrid_search']['grid']['teff'][region].min()) 3074 pl.ylim(self.results['igrid_search']['grid']['logg'][region].max(),self.results['igrid_search']['grid']['logg'][region].min()) 3075 cbar = pl.colorbar() 3076 pl.xlabel('log (effective temperature [K]) [dex]') 3077 pl.ylabel(r'log (surface gravity [cm s$^{-2}$]) [dex]') 3078 3079 if ptype=='prob': 3080 cbar.set_label('Probability (incl. plx) [%]') 3081 elif ptype=='radii': 3082 cbar.set_label('Model radii [$R_\odot$]') 3083 logger.info('Plotted teff-logg diagram of models (%s)'%(ptype))
3084 3085 3086 @standalone_figure
3087 - def plot_MW_side(self):
3088 #im = image.open(config.get_datafile('images','NRmilkyway.tif')) 3089 fn_im = os.path.join(cc.path.ivsdata,'images','NRmilkyway.tif') 3090 im = image.open(fn_im) 3091 left,bottom,width,height = 0.0,0.0,1.0,1.0 3092 startm,endm = 183,-177 3093 startv,endv = -89,91 3094 3095 xwidth = startm-endm 3096 ywidth = 90. 3097 ratio = ywidth/xwidth 3098 3099 gal = list(self.info['galpos']) 3100 if gal[0]>180: 3101 gal[0] = gal[0] - 360. 3102 #-- boundaries of ESO image 3103 pl.imshow(im,extent=[startm,endm,startv,endv],origin='lower') 3104 pl.plot(gal[0],gal[1],'rx',ms=15,mew=2) 3105 pl.xlim(startm,endm) 3106 pl.ylim(startv,endv) 3107 pl.xticks([]) 3108 pl.yticks([])
3109 3110 @standalone_figure
3111 - def plot_MW_top(self):
3112 #im = image.open(config.get_datafile('images','topmilkyway.jpg')) 3113 fn_im = os.path.join(cc.path.ivsdata,'images','topmilkyway.jpg') 3114 im = image.open(fn_im) 3115 pl.imshow(im,origin='lower') 3116 pl.box(on=False) 3117 pl.xticks([]) 3118 pl.yticks([]) 3119 xlims = pl.xlim() 3120 ylims = pl.ylim() 3121 gal = self.info['galpos'] 3122 pl.plot(2800,1720,'ro',ms=10) 3123 pl.plot([2800,-5000*np.sin(gal[0]/180.*np.pi)+2800],[1720,5000*np.cos(gal[0]/180.*np.pi)+1720],'r-',lw=2) 3124 3125 #-- necessary information 3126 if 'igrid_search' in self.results and 'd_mod' in self.results['igrid_search']: 3127 (d_models,d_prob_models,radii) = self.results['igrid_search']['d_mod'] 3128 (d,dprob) = self.results['igrid_search']['d'] 3129 else: 3130 d = np.linspace(0,1000,2) 3131 dprob = np.zeros(len(d)) 3132 3133 x = d/10.*1.3 3134 y = dprob*1000. 3135 theta = gal[0]/180.*np.pi + np.pi/2. 3136 x_ = np.cos(theta)*x - np.sin(theta)*y + 2800 3137 y_ = np.sin(theta)*x + np.cos(theta)*y + 1720 3138 3139 pl.plot(x_,y_,'r-',lw=2) 3140 index = np.argmax(y) 3141 pl.plot(np.cos(theta)*x[index] + 2800,np.sin(theta)*x[index] + 1720,'rx',ms=15,mew=2) 3142 3143 pl.xlim(xlims) 3144 pl.ylim(ylims)
3145 3146 @standalone_figure
3147 - def plot_finderchart(self,cmap_photometry=pl.cm.spectral,window_size=5.):
3148 """ 3149 Size is x and y width in arcminutes 3150 """ 3151 try: 3152 dec = 'jdedeg' in self.info and self.info['jdedeg'] or None 3153 ra = 'jradeg' in self.info and self.info['jradeg'] or None 3154 data,coords,size = mast.get_dss_image(self.ID,ra=ra,dec=dec) 3155 pl.imshow(data[::-1],extent=[-size[0]/2*60,size[0]/2*60, 3156 -size[1]/2*60,size[1]/2*60],cmap=pl.cm.RdGy_r)#Greys 3157 pl.xlim(-window_size/2.,+window_size/2.) 3158 pl.ylim(-window_size/2.,+window_size/2.) 3159 xlims,ylims = pl.xlim(),pl.ylim() 3160 keep_this = -self.master['color'] & (self.master['cmeas']>0) 3161 toplot = self.master[keep_this] 3162 systems = np.array([system.split('.')[0] for system in toplot['photband']],str) 3163 set_systems = sorted(list(set(systems))) 3164 color_cycle = itertools.cycle([cmap_photometry(j) for j in np.linspace(0, 1.0, len(set_systems))]) 3165 for system in set_systems: 3166 color = color_cycle.next() 3167 keep = systems==system 3168 if sum(keep): 3169 pl.plot(toplot['_RAJ2000'][keep][0]*60, 3170 toplot['_DEJ2000'][keep][0]*60,'x',label=system, 3171 mew=2.5,ms=15,alpha=0.5,color=color) 3172 leg = pl.legend(numpoints=1,prop=dict(size='x-small'),loc='best',fancybox=True) 3173 leg.get_frame().set_alpha(0.75) 3174 pl.xlim(xlims) 3175 pl.ylim(ylims) 3176 pl.xlabel(r'Right ascension $\alpha$ [arcmin]') 3177 pl.ylabel(r'Declination $\delta$ [arcmin]') 3178 except: 3179 logger.warning('No image found of %s'%(self.ID)) 3180 pass 3181 3182 if 'pm' in self.info: 3183 logger.info("Found proper motion info") 3184 ppm_ra,ppm_de = (self.info['pm']['pmRA'],self.info['pm']['epmRA']),(self.info['pm']['pmDE'],self.info['pm']['epmDE']) 3185 pl.annotate('',xy=(ppm_ra[0]/50.,ppm_de[0]/50.), 3186 xycoords='data',xytext=(0,0),textcoords='data',color='red', 3187 arrowprops=dict(facecolor='red', shrink=0.05), 3188 horizontalalignment='right', verticalalignment='top') 3189 pl.annotate('pmRA: %.1f $\pm$ %.1f mas/yr\npmDE: %.1f $\pm$ %.1f mas/yr'%(ppm_ra[0],ppm_ra[1],ppm_de[0],ppm_de[1]), 
3190 xy=(0.05,0.25),xycoords='axes fraction',color='red') 3191 if 'igrid_search' in self.results and 'd' in self.results['igrid_search']: 3192 (d,dprob) = self.results['igrid_search']['d'] 3193 max_distance = d[np.argmax(dprob)] 3194 e_max_distance = abs(max_distance - d[np.argmin(np.abs(dprob-0.5*max(dprob)))]) 3195 elif 'plx' in self.info and 'v' in self.info['plx'] and 'e' in self.info['plx']: 3196 plx,eplx = self.info['plx']['v'],self.info['plx']['e'] 3197 dist = 1000./ufloat((plx,eplx)) 3198 max_distance,e_max_distance = dist.nominal_value,dist.std_dev() 3199 else: 3200 max_distance = 1000. 3201 e_max_distance = 100. 3202 3203 tang_velo = 'Tan. vel. at %.0f+/-%.0f pc: '%(max_distance,e_max_distance) 3204 3205 max_distance = conversions.convert('pc','km',max_distance,e_max_distance) 3206 ppm_ra = conversions.convert('mas/yr','rad/s',*ppm_ra) 3207 ppm_de = conversions.convert('mas/yr','rad/s',*ppm_de) 3208 max_distance = unumpy.uarray([max_distance[0],max_distance[1]]) 3209 x = unumpy.uarray([ppm_ra[0],ppm_ra[1]]) 3210 y = unumpy.uarray([ppm_de[0],ppm_de[1]]) 3211 velocity = max_distance*utan( usqrt(x**2+y**2)) 3212 3213 3214 pl.annotate(tang_velo + '%s km/s'%(velocity),xy=(0.05,0.2),xycoords='axes fraction',color='red') 3215 if 'Vel' in self.info and 'v' in self.info['Vel']: 3216 rad_velo = 'Rad. vel.: %.1f'%(self.info['Vel']['v']) 3217 if 'e' in self.info['Vel']: 3218 rad_velo += '+/-%.1f'%(self.info['Vel']['e']) 3219 pl.annotate(rad_velo+' km/s',xy=(0.05,0.15),xycoords='axes fraction',color='red')
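
The tangential-velocity annotation above combines proper motion and distance through unit conversions and the C{uncertainties} package. Stripped of the error propagation, the underlying relation is the standard small-angle formula, sketched here (the factor 4.74 km/s is 1 AU/yr; the function name is illustrative):

```python
import math

def tangential_velocity(pm_ra, pm_de, distance_pc):
    """Tangential velocity in km/s from proper motions in mas/yr and
    a distance in pc, via v_t = 4.74 * mu["/yr] * d[pc]."""
    mu_arcsec_per_yr = math.hypot(pm_ra, pm_de) / 1000.0
    return 4.74 * mu_arcsec_per_yr * distance_pc

v = tangential_velocity(10.0, 0.0, 100.0)
# 10 mas/yr at 100 pc corresponds to roughly 4.7 km/s
```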
3220 3221
3222 - def make_plots(self):
3223 """ 3224 Make all available plots 3225 """ 3226 pl.figure(figsize=(22,12)) 3227 rows,cols = 3,4 3228 pl.subplots_adjust(left=0.04, bottom=0.07, right=0.97, top=0.96, 3229 wspace=0.17, hspace=0.24) 3230 pl.subplot(rows,cols,1);self.plot_grid(ptype='ci_red') 3231 pl.subplot(rows,cols,2);self.plot_grid(ptype='ebv') 3232 pl.subplot(rows,cols,3);self.plot_grid(ptype='z') 3233 pl.subplot(rows,cols,4);self.plot_distance() 3234 3235 pl.subplot(3,2,3);self.plot_sed(colors=False) 3236 pl.subplot(3,2,5);self.plot_sed(colors=True) 3237 3238 pl.subplot(rows,cols,7);self.plot_chi2(colors=False) 3239 pl.subplot(rows,cols,11);self.plot_chi2(colors=True) 3240 3241 #pl.subplot(rows,cols,8);self.plot_grid_model(ptype='prob') 3242 #pl.subplot(rows,cols,12);self.plot_grid_model(ptype='radii') 3243 3244 pl.figure(figsize=(12,12)) 3245 pl.axes([0,0.0,1.0,0.5]);self.plot_MW_side() 3246 pl.axes([0,0.5,0.5,0.5]);self.plot_MW_top() 3247 pl.axes([0.5,0.5,0.5,0.5]);self.plot_finderchart()
3248 3249 #} 3250 3251 #{Input and output 3252
3253 - def save_photometry(self,photfile=None):
3254 """ 3255 Save master photometry to a file. 3256 3257 @param photfile: name of the photfile. Defaults to C{starname.phot}. 3258 @type photfile: str 3259 """ 3260 #-- write to file 3261 if photfile is not None: 3262 self.photfile = photfile 3263 logger.info('Save photometry to file %s'%(self.photfile)) 3264 #-- add some comments 3265 if self.ID: 3266 if not 'bibcode' in self.master.dtype.names: 3267 self.master = crossmatch.add_bibcodes(self.master) 3268 if not 'comments' in self.master.dtype.names: 3269 self.master = vizier.quality_check(self.master,self.ID) 3270 ascii.write_array(self.master,self.photfile,header=True,auto_width=True,use_float='%g',comments=['#'+json.dumps(self.info)])
3271
3272 - def load_photometry(self,photfile=None):
3273 """ 3274 Load the contents of the photometry file to the master record. 3275 3276 @param photfile: name of the photfile. Defaults to the value of C{self.photfile}. 3277 @type photfile: str 3278 """ 3279 if photfile is not None: 3280 self.photfile = photfile 3281 logger.info('Load photometry from file %s'%(self.photfile)) 3282 self.master,comments = ascii.read2recarray(self.photfile,return_comments=True) 3283 self.info = json.loads(comments[-3])
3284 3285 3286
3287 - def save_fits(self,filename=None,overwrite=True):
3288 """ 3289 Save content of SED object to a FITS file. 3290 3291 The .fits file will contain the following extensions if they are present in the object: 3292 1) data (all previously collected photometric data = content of .phot file) 3293 2) model_igrid_search (full best model SED table from grid_search) 3294 3) igrid_search (CI from grid search = actual grid samples) 3295 4) synflux_igrid_search (integrated synthetic fluxes for best model from grid_search) 3296 5) model_imc (full best model SED table from monte carlo) 3297 6) imc (CI from monte carlo = actual monte carlo samples) 3298 7) synflux_imc (integrated synthetic fluxes for best model from monte carlo) 3299 3300 @param filename: name of SED FITS file 3301 @type filename: string 3302 @param overwrite: overwrite old FITS file if true 3303 @type overwrite: boolean 3304 3305 Example usage: 3306 3307 >>> #mysed.save_fits() 3308 >>> #mysed.save_fits(filename='myname.fits') 3309 """ 3310 if filename is None: 3311 filename = str(os.path.splitext(self.photfile)[0]+'.fits') 3312 if overwrite: 3313 if os.path.isfile(filename): 3314 os.remove(filename) 3315 logger.info('Old FITS file removed') 3316 3317 #-- write primary header 3318 #prim_header = {} 3319 #for key in self.info: 3320 #if not (isinstance(self.info[key],float) or isinstance(self.info[key],str)): 3321 #continue 3322 #prim_header[key] = self.info[key] 3323 #fits.write_recarray(np.array([[0]]),filename,header_dict=prim_header,ext=0) 3324 3325 #-- write master data 3326 master = self.master.copy() 3327 fits.write_recarray(master,filename,header_dict=dict(extname='data')) 3328 3329 #-- write the rest 3330 for mtype in self.results:#['igrid_search','imc']: 3331 eff_waves,synflux,photbands = self.results[mtype]['synflux'] 3332 chi2 = self.results[mtype]['chi2'] 3333 3334 results_modeldict = dict(extname='model_'+mtype) 3335 results_griddict = dict(extname=mtype) 3336 keys = sorted(self.results[mtype]) 3337 for key in keys: 3338 if 'CI' in key: 3339 for ikey 
in self.results[mtype][key]: 3340 if '_l' not in ikey and '_u' not in ikey and ikey != 'chisq': 3341 results_modeldict[ikey] = self.results[mtype][key][ikey] 3342 results_griddict[ikey] = self.results[mtype][key][ikey] 3343 if key=='factor': 3344 results_griddict[key] = self.results[mtype][key] 3345 3346 fits.write_array(list(self.results[mtype]['model']),filename, 3347 names=('wave','flux','dered_flux'), 3348 units=('AA','erg/s/cm2/AA','erg/s/cm2/AA'), 3349 header_dict=results_modeldict) 3350 if 'grid' in self.results[mtype]: 3351 fits.write_recarray(self.results[mtype]['grid'],filename,header_dict=results_griddict) 3352 3353 results = np.rec.fromarrays([synflux,eff_waves,chi2],dtype=[('synflux','f8'),('mod_eff_wave','f8'),('chi2','f8')]) 3354 3355 fits.write_recarray(results,filename,header_dict=dict(extname='synflux_'+mtype)) 3356 3357 logger.info('Results saved to FITS file: %s'%(filename))
3358 3359
3360 - def load_fits(self,filename=None):
3361 """ 3362 Load a previously made SED FITS file. Only works for SEDs saved with 3363 the save_fits function after 14.06.2012. 3364 3365 The .fits file can contain the following extensions: 3366 1) data (all previously collected photometric data = content of .phot file) 3367 2) model_igrid_search (full best model SED table from grid_search) 3368 3) igrid_search (CI from grid search = actual grid samples) 3369 4) synflux_igrid_search (integrated synthetic fluxes for best model from grid_search) 3370 5) model_imc (full best model SED table from monte carlo) 3371 6) imc (CI from monte carlo = actual monte carlo samples) 3372 7) synflux_imc (integrated synthetic fluxes for best model from monte carlo) 3373 3374 @param filename: name of SED FITS file 3375 @type filename: string 3376 @rtype: bool 3377 @return: true if Fits file could be loaded 3378 """ 3379 if filename is None: 3380 filename = os.path.splitext(self.photfile)[0]+'.fits' 3381 if not os.path.isfile(filename): 3382 logger.warning('No previous results saved to FITS file {:s}'.format(filename)) 3383 return False 3384 ff = pyfits.open(filename) 3385 3386 #-- observed photometry 3387 fields = ff['data'].columns.names 3388 master = np.rec.fromarrays([ff['data'].data.field(field) for field in fields],names=','.join(fields)) 3389 #-- remove the whitespace in columns with strings added by fromarrays 3390 for i,name in enumerate(master.dtype.names): 3391 if master.dtype[i].str.count('S'): 3392 for j,element in enumerate(master[name]): 3393 master[name][j] = element.strip() 3394 self.master = master 3395 3396 #-- add dictionary that will contain the results 3397 if not hasattr(self,'results'): 3398 self.results = {} 3399 3400 #-- grid search and MC results 3401 mtypes = [ext.header['extname'] for ext in ff[1:]] 3402 mtypes = list(set(mtypes) - set(['DATA'])) 3403 for mtype in mtypes: 3404 mtype = mtype.lower().lstrip('synflux_').lstrip('model_') 3405 if not mtype in self.results: 3406 self.results[mtype] = {} 3407 
self.results[mtype]['model'] = np.array(ff['model_'+mtype].data.field('wave'),dtype='float64'),np.array(ff['model_'+mtype].data.field('flux'),dtype='float64'),np.array(ff['model_'+mtype].data.field('dered_flux'),dtype='float64') 3408 self.results[mtype]['chi2'] = np.array(ff['synflux_'+mtype].data.field('chi2'),dtype='float64') 3409 self.results[mtype]['synflux'] = np.array(ff['synflux_'+mtype].data.field('mod_eff_wave'),dtype='float64'),np.array(ff['synflux_'+mtype].data.field('synflux'),dtype='float64'),self.master['photband'] 3410 3411 3412 for mtype in ['igrid_search','iminimize','imc']: 3413 try: 3414 fields = ff[mtype].columns.names 3415 master = np.rec.fromarrays([ff[mtype].data.field(field) for field in fields],names=','.join(fields)) 3416 if not mtype in self.results: 3417 self.results[mtype] = {} 3418 self.results[mtype]['grid'] = master 3419 if 'factor' in ff[mtype].header: 3420 self.results[mtype]['factor'] = np.array([ff[mtype].header['factor']])[0] 3421 3422 headerkeys = ff[mtype].header.ascardlist().keys() 3423 for key in headerkeys[::-1]: 3424 for badkey in ['xtension','bitpix','naxis','pcount','gcount','tfields','ttype','tform','tunit','factor','extname']: 3425 if key.lower().count(badkey): 3426 headerkeys.remove(key) 3427 continue 3428 self.results[mtype]['CI'] = {} 3429 for key in headerkeys: 3430 #-- we want to have the same types as the original: numpy.float64 --> np.array([..])[0] 3431 self.results[mtype]['CI'][key.lower()] = np.array([ff[mtype].header[key]])[0] 3432 except KeyError: 3433 continue 3434 3435 #self.results['igrid_search'] = {} 3436 #fields = ff['igrid_search'].columns.names 3437 #master = np.rec.fromarrays([ff['igrid_search'].data.field(field) for field in fields],names=','.join(fields)) 3438 #self.results['igrid_search']['grid'] = master 3439 #self.results['igrid_search']['factor'] = ff['igrid_search'].header['factor'] 3440 3441 #self.results['model'] = 
ff[2].data.field('wave'),ff[2].data.field('flux'),ff[2].data.field('dered_flux') 3442 #self.results['chi2'] = ff[4].data.field('chi2') 3443 #self.results['synflux'] = ff[4].data.field('mod_eff_wave'),ff[4].data.field('synflux'),ff[1].data.field('photband') 3444 3445 ff.close() 3446 3447 logger.info('Loaded previous results from FITS file: %s'%(filename)) 3448 return filename
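
One caveat in C{load_fits} above: C{str.lstrip} strips a I{set of characters}, not a prefix, so C{mtype.lstrip('synflux_').lstrip('model_')} only works because none of the extension names used here happens to start with one of the stripped characters after its prefix. An extension named C{model_minimize}, for instance, would come out as C{inimize}. A safer prefix-removal sketch (the helper name is illustrative):

```python
def strip_prefix(name, prefixes=('synflux_', 'model_')):
    """Remove a known prefix from an extension name, if present."""
    for prefix in prefixes:
        if name.startswith(prefix):
            return name[len(prefix):]
    return name

# lstrip removes characters from the set {m, o, d, e, l, _}, not the prefix:
# 'model_minimize'.lstrip('model_') gives 'inimize'
```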
3449
3450 - def save_hdf5(self, filename=None, update=True):
3451          """
3452          Save the content of the SED object to an HDF5 file. (HDF5 is the successor of FITS files,
3453          providing a clearer structure of the saved content.)
3454          This way of saving is more thorough than save_fits(); e.g. the CI2D confidence
3455          intervals are not saved to a FITS file, but are saved to an HDF5 file.
3456          Currently the following data is saved to the HDF5 file:
3457              - sed.master (photometry)
3458              - sed.results (results from all fitting methods)
3459              - sed.constraints (extra constraints on the fits)
3460  
3461          @param filename: name of the SED HDF5 file
3462          @type filename: string
3463          @param update: if True, an existing file will be updated with the current information; if
3464          False, an existing file will be overwritten
3465          @type update: bool
3466          @return: the name of the output HDF5 file.
3467          @rtype: string
3468          """
3469  
3470          if filename is None:
3471              filename = str(os.path.splitext(self.photfile)[0]+'.hdf5')
3472  
3473          data = dict()
3474          data['master'] = self.master
3475          data['results'] = self.results
3476          data['constraints'] = self.constraints
3477  
3478          hdf5.write_dict(data, filename, update=update)
3479  
3480          logger.info('Results saved to HDF5 file: %s'%(filename))
3481          return filename
3482
3483 - def load_hdf5(self,filename=None):
3484          """
3485          Load a previously made SED from an HDF5 file.
3486  
3487          @param filename: name of the SED HDF5 file
3488          @type filename: string
3489          @return: True if the HDF5 file could be loaded
3490          @rtype: bool
3491          """
3492          if filename is None:
3493              filename = os.path.splitext(self.photfile)[0]+'.hdf5'
3494          if not os.path.isfile(filename):
3495              logger.warning('No previous results saved to HDF5 file {:s}'.format(filename))
3496              return False
3497  
3498          data = hdf5.read2dict(filename)
3499  
3500          self.master = data.get('master', {})
3501          self.results = data.get('results', {})
3502          self.constraints = data.get('constraints', {})
3503  
3504          logger.info('Loaded previous results from HDF5 file: %s'%(filename))
3505          logger.debug('Loaded following datasets from HDF5 file:\n %s'%(data.keys()))
3506          return True
3507
3508 - def save_bibtex(self):
3509 """ 3510 Convert the bibcodes in a phot file to a bibtex file. 3511 3512 The first line in the bibtex file contains a \citet command citing 3513 all photometry. 3514 """ 3515 filename = os.path.splitext(self.photfile)[0]+'.bib' 3516 crossmatch.make_bibtex(self.master,filename=filename)
3517
3518 - def save_summary(self,filename=None,CI_limit=None,method='igrid_search',chi2type='ci_red'):
3519 """ 3520 Save a summary of the results to an ASCII file. 3521 """ 3522 #-- open the summary file to write the results 3523 if filename is None: 3524 filename = os.path.splitext(self.photfile)[0]+'.sum' 3525 3526 if CI_limit is None: 3527 CI_limit = self.CI_limit 3528 3529 #-- gather the results: 3530 grid_results = self.results[method]['grid'] 3531 start_CI = np.argmin(np.abs(grid_results[chi2type]-self.CI_limit)) 3532 factor = self.results[method]['factor'] 3533 names = ['factor','chi2_type','ci_limit'] 3534 results = [factor,chi2type,CI_limit*100] 3535 for name in grid_results.dtype.names: 3536 lv,cv,uv = grid_results[name][start_CI:].min(),\ 3537 grid_results[name][-1],\ 3538 grid_results[name][start_CI:].max() 3539 names += [name+'_l',name,name+'_u'] 3540 results += [lv,cv,uv] 3541 #-- write the used photometry to a file 3542 include_grid = self.master['include'] 3543 photbands = ":".join(self.master[include_grid]['photband']) 3544 references = ",".join(self.master[include_grid]['bibcode']) 3545 used_photometry = photometry2str(self.master[include_grid],comment='#') 3546 used_atmosphere = '#'+model.defaults2str()+'\n' 3547 used_photbands = '#'+photbands+'\n' 3548 used_references = '#'+references 3549 comments = used_photometry+used_atmosphere+used_photbands+used_references 3550 3551 contents = np.array([results]).T 3552 contents = np.rec.fromarrays(contents,names=names) 3553 ascii.write_array(contents,filename,auto_width=True,header=True, 3554 comments=comments.split('\n'),mode='a',use_float='%g') 3555 3556 logger.info('Saved summary to {0}'.format(filename))
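
The lower/central/upper values written by C{save_summary} come straight from the grid: the central value is the best-fitting (last) grid point, and the bounds are the extremes over all models inside the confidence limit. A standalone sketch of that extraction (function name illustrative; the grid columns are assumed sorted from worst to best fit, as in the module):

```python
import numpy as np

def summarize_parameter(values, ci, limit=0.95):
    """(lower, central, upper) for one grid column, as in save_summary."""
    start = np.argmin(np.abs(ci - limit))   # first model inside the CI limit
    return values[start:].min(), values[-1], values[start:].max()

grid_teff = np.array([5000., 6000., 5500., 5800.])   # sorted worst -> best
grid_ci = np.array([0.99, 0.96, 0.80, 0.40])         # confidence values
lo, best, hi = summarize_parameter(grid_teff, grid_ci)
# best is the last (best-fitting) entry; lo/hi span the models inside the CI
```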
3557
3558 - def ci2str(self, mtype='igrid_search'):
3559          """
3560          Return the stored confidence intervals of the given mtype as a string; the
3561          confidence intervals are sorted alphabetically.
3562  
3563          @param mtype: the search type for which to show the ci
3564          @type mtype: str
3565          @return: The confidence intervals in a string
3566          @rtype: str
3567          """
3568          if not 'CI' in self.results[mtype]:
3569              res = "No confidence intervals saved for type: %s"%(mtype)
3570              return res
3571          else:
3572              res = "Stored confidence intervals for type: %s \n"%(mtype)
3573  
3574          ci = self.results[mtype]['CI']
3575          keys = [ key for key in ci.keys() if not '_' in key ]
3576          keys.sort()
3577          for key in keys:
3578              try:
3579                  res += "%15s : %g <= %g <= %g\n"%(key, ci[key+'_l'], ci[key], ci[key+'_u'])
3580              except Exception:
3581                  res += "%15s : nan <= %g <= nan\n"%(key, ci[key])
3582  
3583          return res.rstrip()
3584
3585 #} 3586 3587 -class BinarySED(SED):
3588
3589 - def __init__(self,ID=None,photfile=None,plx=None,load_fits=True,load_hdf5=True,label='', **kwargs):
3590          """
3591          Set up the binary SED in the same way as a normal SED.
3592          The masses of both components can be provided, and will then be used in igrid_search,
3593          iminimize, and while calculating CI and CI2D confidence intervals.
3594          """
3595          super(BinarySED, self).__init__(ID=ID,photfile=photfile,plx=plx,label=label,\
3596                        load_fits=load_fits,load_hdf5=load_hdf5)
3597  
3598          self.set_constraints(**kwargs)
3599 3600
3601 - def set_constraints(self, **kwargs):
3602          """
3603          Add constraints that are used when fitting the binary SED. Currently the following
3604          constraints are supported:
3605              - masses (in Msol)
3606              - distance (in Rsol)
3607          TODO: This function should in the future accept Units.
3608          """
3609          for key in kwargs.keys():
3610              if kwargs[key] is not None:
3611                  self.constraints[key] = kwargs[key]
3612
3613 - def constraints2str(self):
3614 """ 3615 Summarizes all constraints in a string. 3616 """ 3617 res = "" 3618 for key in self.constraints.keys(): 3619 res += "Using constraint: %s = %s\n"%(key, self.constraints[key]) 3620 res = res[:-1] 3621 return res
3622
3623 - def igrid_search(self,points=100000,teffrange=None,loggrange=None,ebvrange=None,\ 3624 zrange=None,rvrange=((3.1,3.1),(3.1,3.1)),vradrange=((0,0),(0,0)),\ 3625 radrange=(None,None),compare=True,df=None,CI_limit=None,\ 3626 set_model=True, distance=None,**kwargs):
3627          """
3628          Fit fundamental parameters using a (pre-integrated) grid search.
3629  
3630          If called consecutively, the ranges will be set to the CI_limit of previous
3631          estimations, unless set explicitly.
3632  
3633          If called for the first time, the ranges will be +/- np.inf by default,
3634          unless set explicitly.
3635          """
3636  
3637          if CI_limit is None or CI_limit > 1.0:
3638              CI_limit = self.CI_limit
3639  
3640          #-- set default limits
3641          ranges = self.generate_ranges(teffrange=teffrange[0],\
3642                              loggrange=loggrange[0],ebvrange=ebvrange[0],\
3643                              zrange=zrange[0],rvrange=rvrange[0],vradrange=vradrange[0],
3644                              radrange=radrange[0],teff2range=teffrange[1],\
3645                              logg2range=loggrange[1],ebv2range=ebvrange[1],\
3646                              z2range=zrange[1],rv2range=rvrange[1],vrad2range=vradrange[1],
3647                              rad2range=radrange[1])
3648  
3649          #-- grid search on all included data: extract the best CHI2
3650          include_grid = self.master['include']
3651          logger.info('The following measurements are included in the fitting process:\n%s'%\
3652                     (photometry2str(self.master[include_grid])))
3653  
3654          logger.info('The following constraints are included in the fitting process:\n%s'%\
3655                     (self.constraints2str()))
3656  
3657          #-- build the grid, run over the grid and calculate the CHI2
3658          masses = self.constraints.get('masses', None)
3659          pars = fit.generate_grid_pix(self.master['photband'][include_grid], masses=masses, points=points, **ranges)
3660  
3661          chisqs,scales,escales,lumis = fit.igrid_search_pix(self.master['cmeas'][include_grid],
3662                                       self.master['e_cmeas'][include_grid],
3663                                       self.master['photband'][include_grid],
3664                                       model_func=model.get_itable_pix,constraints=self.constraints,
3665                                       **pars)
3666          fitres = dict(chisq=chisqs, scale=scales, escale=escales, labs=lumis)
3667  
3668          #-- collect all the results in a record array
3669          self.collect_results(grid=pars, fitresults=fitres, mtype='igrid_search')
3670  
3671          #-- do the statistics
3672          self.calculate_statistics(df=df, ranges=ranges,
mtype='igrid_search') 3673 3674 #-- compute the confidence intervals 3675 ci = self.calculate_confidence_intervals(mtype='igrid_search',chi2_type='red',\ 3676 CI_limit=CI_limit) 3677 self.store_confidence_intervals(mtype='igrid_search', **ci) 3678 3679 #-- remember the best model 3680 if set_model: self.set_best_model()
3681
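The grid-search strategy above (draw points inside the parameter ranges, evaluate chi-squared for each, keep the best) can be sketched in pure Python. This is a toy stand-in for `fit.generate_grid_pix` / `fit.igrid_search_pix` with a hypothetical one-parameter model, not the library's implementation:

```python
import math
import random

def toy_model(teff, wave):
    # hypothetical stand-in for a model atmosphere: flux falls off with wavelength
    return teff * 1e-4 / wave

def igrid_search_toy(meas, e_meas, waves, teffrange, points=20000, seed=4):
    """Draw random teff points in the range, return (best_chi2, best_teff)."""
    rng = random.Random(seed)
    best_chi2, best_teff = float('inf'), None
    for _ in range(points):
        teff = rng.uniform(*teffrange)
        chi2 = sum(((m - toy_model(teff, w)) / e)**2
                   for m, e, w in zip(meas, e_meas, waves))
        if chi2 < best_chi2:
            best_chi2, best_teff = chi2, teff
    return best_chi2, best_teff

# synthetic "observations" generated from teff=9000 with 1% errors
waves = [1000.0, 2000.0, 5000.0]
meas = [toy_model(9000.0, w) for w in waves]
e_meas = [0.01 * m for m in meas]

chi2, teff = igrid_search_toy(meas, e_meas, waves, (5000, 15000))
```

With enough points the best `teff` lands close to the input value; the real method additionally fits a scale factor and applies the stored constraints.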
    def generate_fit_param(self, start_from='igrid_search', **pars):
        """
        Generate a dictionary with parameter information that can be handled by fit.iminimize.
        """
        masses = self.constraints.get('masses', None)
        result = super(BinarySED, self).generate_fit_param(start_from=start_from, **pars)

        #-- Couple ebv and rv of both components
        result['ebv2_expr'] = 'ebv'
        result['rv2_expr'] = 'rv'

        if masses is not None:
            #-- Couple the radii to the masses
            G, Msol, Rsol = constants.GG_cgs, constants.Msol_cgs, constants.Rsol_cgs
            result['rad_value'] = np.sqrt(G*masses[0]*Msol/10**result['logg_value'])
            result['rad2_value'] = np.sqrt(G*masses[1]*Msol/10**result['logg2_value'])
            result['rad_expr'] = 'sqrt(%0.5f/10**logg) * 165.63560394542122'%(masses[0])
            result['rad2_expr'] = 'sqrt(%0.5f/10**logg2) * 165.63560394542122'%(masses[1])
            result['rad_min'], result['rad_max'] = None, None
            result['rad2_min'], result['rad2_max'] = None, None
            result['rad_vary'], result['rad2_vary'] = False, False
        else:
            result['rad_value'] = self.results[start_from]['CI']['rad']
            result['rad2_value'] = self.results[start_from]['CI']['rad2']

        return result

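The constant 165.63560394542122 in `rad_expr` follows from g = G*M/R^2: with the mass in solar masses and the radius wanted in solar radii, the prefactor is sqrt(G*Msol)/Rsol in cgs units. A quick check with round-number constants (the module takes its values from `constants.GG_cgs` etc., so the last digits differ):

```python
import math

# approximate cgs constants; the module uses cc.ivs constants instead
GG   = 6.6743e-8    # gravitational constant [cm^3 g^-1 s^-2]
Msol = 1.988e33     # solar mass [g]
Rsol = 6.9566e10    # solar radius [cm]

# From g = G*M/R^2: R = sqrt(G*M/g). With M in Msol and R in Rsol the
# prefactor is sqrt(G*Msol)/Rsol, the constant hard-coded in rad_expr.
prefactor = math.sqrt(GG * Msol) / Rsol

# sanity check: a 1 Msol star with the solar logg (~4.438 in cgs)
# should come out with a radius of ~1 Rsol
rad = math.sqrt(1.0 / 10**4.438) * prefactor
```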
    def iminimize(self, teff=(None,None), logg=(None,None), ebv=(None,None), z=(None,None),
                  rv=(None,None), vrad=(0,0), teffrange=(None,None), loggrange=(None,None),
                  ebvrange=(None,None), zrange=(None,None), rvrange=(None,None),
                  vradrange=(None,None), radrange=(None,None), compare=True, df=None,
                  distance=None, start_from='igrid_search', points=None, CI_limit=None,
                  calc_ci=True, set_model=True, **kwargs):
        """ Binary minimizer """

        ranges = self.generate_ranges(teffrange=teffrange[0],\
                        loggrange=loggrange[0],ebvrange=ebvrange[0],\
                        zrange=zrange[0],rvrange=rvrange[0],vradrange=vradrange[0],
                        radrange=radrange[0],teff2range=teffrange[1],\
                        logg2range=loggrange[1],ebv2range=ebvrange[1],\
                        z2range=zrange[1],rv2range=rvrange[1],vrad2range=vradrange[1],
                        rad2range=radrange[1])
        pars = self.generate_fit_param(teff=teff[0],logg=logg[0],ebv=ebv[0],z=z[0],\
                        rv=rv[0], vrad=vrad[0],teff2=teff[1],logg2=logg[1],\
                        ebv2=ebv[1],z2=z[1],rv2=rv[1],vrad2=vrad[1], \
                        start_from=start_from, **ranges)

        #-- Print the used data and constraints
        include_grid = self.master['include']
        logger.info('The following measurements are included in the fitting process:\n%s'%\
                    (photometry2str(self.master[include_grid])))
        logger.info('The following constraints are included in the fitting process:\n%s'%\
                    (self.constraints2str()))

        #-- pass all ranges and starting values to the fitter
        kick_list = ['teff', 'logg', 'teff2', 'logg2', 'ebv']
        grid, chisq, nfev, scale, lumis = fit.iminimize(self.master['cmeas'][include_grid],
                    self.master['e_cmeas'][include_grid], self.master['photband'][include_grid],
                    points=points, kick_list=kick_list, constraints=self.constraints, **pars)

        logger.info('Minimizer success with startpoints=%s, chi2=%s, nfev=%s'%(len(chisq), chisq[0], nfev[0]))
        #-- handle the results
        fitres = dict(chisq=chisq, nfev=nfev, scale=scale, labs=lumis)
        self.collect_results(grid=grid, fitresults=fitres, mtype='iminimize', selfact='chisq')

        #-- Do the statistics
        self.calculate_statistics(df=5, ranges=ranges, mtype='iminimize', selfact='chisq')

        #-- Store the confidence intervals
        ci = self._get_imin_ci(mtype='iminimize',**ranges)
        self.store_confidence_intervals(mtype='iminimize', **ci)
        if calc_ci:
            self.calculate_iminimize_CI(mtype='iminimize', CI_limit=CI_limit)

        #-- remember the best model
        if set_model: self.set_best_model(mtype='iminimize')

    def calculate_iminimize_CI(self, mtype='iminimize', CI_limit=0.66, **kwargs):
        """
        Calculate the confidence intervals for each parameter using the lmfit
        confidence-interval method.

        The calculated confidence intervals are stored in the results['CI']
        dictionary. If the method fails, or if the requested CI lies outside the
        provided ranges, those ranges are used as the CI.

        This method works in the same way as for a single SED, but it adds
        the mass as an extra constraint if the mass of both components is stored.
        """
        masses = self.constraints.get('masses',None)
        super(BinarySED, self).calculate_iminimize_CI(mtype=mtype, CI_limit=CI_limit,\
                                                      masses=masses, constraints=self.constraints,\
                                                      **kwargs)

    def calculate_iminimize_CI2D(self,xpar, ypar, mtype='iminimize', limits=None, res=10, **kwargs):
        """
        Calculate two-dimensional confidence intervals for the given parameters,
        using lmfit methods.

        The calculated confidence intervals are stored in the results['CI2D']
        dictionary.

        This method works in the same way as for a single SED, but it adds
        the mass as an extra constraint if the mass of both components is stored.
        """
        masses = self.constraints.get('masses',None)
        super(BinarySED, self).calculate_iminimize_CI2D(xpar, ypar, mtype=mtype,\
                                                        limits=limits, res=res, **kwargs) #,masses=masses)

    def set_best_model(self,mtype='igrid_search',law='fitzpatrick2004', **kwargs):
        """
        Get the reddened and unreddened model.
        """
        logger.info('Interpolating approximate full SED of best model')

        #-- synthetic flux
        include = self.master['include']
        synflux = np.zeros(len(self.master['photband']))
        keep = (self.master['cwave']<1.6e6) | np.isnan(self.master['cwave'])
        keep = keep & include

        if mtype in ['igrid_search', 'iminimize']:
            scale = self.results[mtype]['CI']['scale']

            #-- get parameters
            pars, pars_ur = {}, {}
            for key in self.results[mtype]['CI'].keys():
                if not key[-2:] == '_u' and not key[-2:] == '_l':
                    pars[key] = self.results[mtype]['CI'][key]
                    if not re.search(r'ebv\d?', key):
                        pars_ur[key] = self.results[mtype]['CI'][key]
                    else:
                        pars_ur[key] = 0.0

            #-- get (approximated) reddened and unreddened model
            wave,flux = model.get_table(**pars)
            wave_ur,flux_ur = model.get_table(**pars_ur)

            #-- get synthetic photometry
            synflux_,pars = model.get_itable(photbands=self.master['photband'][keep], **pars)
            flux,flux_ur = flux*scale,flux_ur*scale

            synflux[keep] = synflux_

            #-- scale the absolute (non-color) measurements; ~ negates the boolean mask
            synflux[~self.master['color']] *= scale
            chi2 = (self.master['cmeas']-synflux)**2/self.master['e_cmeas']**2
            #-- calculate effective wavelengths of the photometric bands via the model
            #   values
            eff_waves = filters.eff_wave(self.master['photband'],model=(wave,flux))
            self.results[mtype]['model'] = wave,flux,flux_ur
            self.results[mtype]['synflux'] = eff_waves,synflux,self.master['photband']
            self.results[mtype]['chi2'] = chi2

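The chi-squared stored at the end of `set_best_model` is computed per measurement as (cmeas - synflux)^2 / e_cmeas^2, i.e. the squared residual in units of the measurement error. A minimal sketch of that step in isolation:

```python
def per_band_chi2(cmeas, synflux, e_cmeas):
    """Per-measurement chi2 terms: squared residual over squared error."""
    return [(m - s)**2 / e**2 for m, s, e in zip(cmeas, synflux, e_cmeas)]

# two bands: residuals of 1.0 and 2.0 with errors 0.5 and 1.0
# both deviate by 2 sigma, so each term is 2**2 = 4
chi2 = per_band_chi2([10.0, 20.0], [9.0, 22.0], [0.5, 1.0])
```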
class Calibrator(SED):
    """
    Convenience class for a photometric standard star or calibrator.
    """
    def __init__(self,ID=None,photfile=None,plx=None,load_fits=False,label='',library='calspec'):
        names,fits_files,phot_files = model.read_calibrator_info(library=library)
        index = names.index(ID)
        #-- retrieve fits file information
        if library in ['ngsl','calspec']:
            fits_file = pyfits.open(fits_files[index])
            wave = fits_file[1].data.field('wavelength')
            flux = fits_file[1].data.field('flux')
            fits_file.close()
        elif library=='stelib':
            wave,flux = fits.read_spectrum(fits_files[index])
        #-- photfile:
        photfile = phot_files[index]
        super(Calibrator,self).__init__(photfile=photfile,plx=plx,load_fits=load_fits,label=label)
        self.set_model(wave,flux)

class SampleSEDs(object):
    """
    Class representing a list of SEDs.
    """
    def __init__(self,targets,**kwargs):
        """
        Initialize a sample.

        This can be done either with a list of IDs or with a list of SED
        instances. The SED must exist! That is, for each ID, there must be a
        phot or FITS file.
        """
        #-- we don't want to load the FITS files by default because they
        #   can eat up memory
        kwargs.setdefault('load_fits',False)
        self.targets = []
        self.seds = []
        #-- create all SEDs
        for target in targets:
            #-- perhaps we were given an ID: in that case, create the SED instance
            if not isinstance(target,SED):
                mysed = SED(target,**kwargs)
                if not mysed.has_photfile():
                    raise ValueError("No phot file found for {}".format(target))
            #-- perhaps it is already an SED object: then do nothing
            else:
                mysed = target
            self.seds.append(mysed)
            self.targets.append(mysed.ID)
        logger.info("Loaded {} SEDs".format(len(self.seds)))

    def __iter__(self):
        """
        Allow iteration over the SED instances.
        """
        for sed in self.seds:
            yield sed

    def __len__(self):
        """
        The length of a SampleSEDs instance is the number of SED instances.
        """
        return len(self.seds)

    def __getitem__(self,key):
        """
        Implements various ways to get individual SEDs.

        Allows integer indexing, slicing, and indexing with integer and boolean arrays.
        """
        #-- via slicing
        if isinstance(key,slice):
            return SampleSEDs([self[ii] for ii in range(*key.indices(len(self)))])
        #-- via an integer
        elif isinstance(key,int):
            return self.seds[key]
        else:
            #-- try to make the input an array
            try:
                key = np.array(key)
            except Exception:
                raise TypeError("Cannot use instance of type {} for indexing".format(type(key)))
            #-- integer array slicing
            if key.dtype==np.dtype(int):
                return SampleSEDs([self[ii] for ii in key])
            #-- boolean array slicing
            elif key.dtype==np.dtype(bool):
                return SampleSEDs([self[ii] for ii in range(len(key)) if key[ii]])
            #-- that's all I can come up with
            else:
                raise TypeError("Cannot use arrays of type {} for indexing".format(key.dtype))

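The dispatch in `__getitem__` (slice, integer, integer sequence, boolean mask) can be illustrated without numpy. This is a simplified stand-in, not the class itself; it distinguishes boolean from integer sequences by element type instead of by array dtype:

```python
class Sample:
    """Minimal stand-in for SampleSEDs-style indexing dispatch."""
    def __init__(self, items):
        self.items = list(items)
    def __len__(self):
        return len(self.items)
    def __getitem__(self, key):
        if isinstance(key, slice):
            # slice -> new sample with the selected items
            return Sample([self[i] for i in range(*key.indices(len(self)))])
        elif isinstance(key, int):
            # integer -> single item
            return self.items[key]
        elif all(isinstance(k, bool) for k in key):
            # boolean mask -> keep items where the mask is True
            return Sample([self[i] for i in range(len(key)) if key[i]])
        else:
            # integer sequence -> fancy indexing
            return Sample([self[i] for i in key])

s = Sample(['a', 'b', 'c', 'd'])
```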
    def summarize(self):
        """
        Summarize the photometry of all SEDs in one record array per source.
        """
        #-- collect the names of all the different sources
        sources = {}
        for sed in self.seds:
            #-- collect the different sources
            these_sources = list(set(sed.master['source']))
            for source in these_sources:
                if not source in sources:
                    sources[source] = []
                #-- now for each source, collect the names of the passbands
                keep = sed.master['source']==source
                sources[source] += list(set(sed.master[keep]['photband']))
                sources[source] = sorted(list(set(sources[source])))
        #-- next step: for each source, create a record array where the columns
        #   are the different photbands. Fill in the values for the photbands
        #   for each target when possible.
        summary = []
        source_names = sorted(sources.keys())
        for source in source_names:
            #-- create the record array
            data = np.zeros((len(self.targets),2*len(sources[source])))
            #   but remember to have errors with the photbands
            names = []
            for photband in sources[source]:
                names += [photband,'e_'+photband]
            data = np.rec.fromarrays(data.T,names=names)
            #-- fill in the values
            for i,sed in enumerate(self.seds):
                for photband in sources[source]:
                    keep = (sed.master['source']==source) & (sed.master['photband']==photband)
                    #-- fill in nans for value and error when not present
                    if not sum(keep):
                        data[photband][i] = np.nan
                        data['e_'+photband][i] = np.nan
                    #-- otherwise give the first value you've found
                    else:
                        data[photband][i] = sed.master[keep]['cmeas'][0]
                        data['e_'+photband][i] = sed.master[keep]['e_cmeas'][0]
                    #if not sum(keep): logger.warning('multiple defined photband ({}) and source ({})'.format(photband,source))
            summary.append(data)
        dtypes = [(source,summary[i].dtype) for i,source in enumerate(source_names)]
        output = np.zeros(len(self.targets),dtype=np.dtype(dtypes))
        for i,name in enumerate(output.dtype.names):
            output[name] = summary[i]
        return output

    def get_data(self,source,photband,label=None):
        """
        Get all data on a particular passband from a particular source.

        If label is not None, synthetic flux from a model will be added to the
        columns.
        """
        records = []
        if label is not None:
            synflux_label = [('synflux','f8')]
        else:
            synflux_label = []
        for targetname,sed in zip(self.targets,self.seds):
            #-- for the source, collect the names of the passbands
            keep = (sed.master['source']==source) & (sed.master['photband']==photband)
            #-- is there a model SED for which we want to retrieve synthetic photometry?
            if label is not None:
                if sum(keep)==0:
                    synflux = [0]
                else:
                    synflux = [sed.results[label]['synflux'][1][keep][0]]
            else:
                synflux = []
            #-- if no data on the matter is present, put zero values everywhere
            if sum(keep)==0:
                records.append([targetname]+list(np.zeros(len(sed.master.dtype.names)))+synflux)
            else:
                records.append([targetname]+list(sed.master[keep][0])+synflux)
        #-- make the thing into a record array
        dtypes = np.dtype([('targetname','|S25')] + sed.master.dtype.descr + synflux_label)
        output = np.rec.fromrecords(records,names=dtypes.names)
        output = np.array(output,dtype=dtypes)
        return output

    def get_confidence_interval(self,parameter='teff',mtype='igrid_search'):
        """
        Collect the lower limit, best value and upper limit of a parameter for all SEDs.
        """
        values = np.zeros((len(self),3))
        for i,sed in enumerate(self):
            sed.load_fits()
            values[i,0] = sed.results[mtype]['CI'][parameter+'_l']
            values[i,1] = sed.results[mtype]['CI'][parameter]
            values[i,2] = sed.results[mtype]['CI'][parameter+'_u']
            sed.clear()
        return values


if __name__ == "__main__":
    import sys
    import doctest
    import pprint
    from cc.ivs.aux import loggers

    if not sys.argv[1:]:
        doctest.testmod()
        pl.show()
    else:
        name = " ".join([string for string in sys.argv[1:] if not '=' in string])
        units = [string.split('=')[1] for string in sys.argv[1:] if 'units=' in string]
        if not units:
            units = 'erg/s/cm2/AA'
        else:
            units = units[0]
        logger = loggers.get_basic_logger("")
        mysed = SED(name)
        pprint.PrettyPrinter(indent=4).pprint(mysed.info)
        mysed.get_photometry(units=units)
        mysed.plot_data()
        pl.show()
        answer = raw_input('Keep photometry file %s (y/N)'%(mysed.photfile))
        if not 'y' in answer.lower():
            os.unlink(mysed.photfile)
            logger.info('Removed %s'%(mysed.photfile))
        raise SystemExit

    #-- clean up
    if os.path.isfile('HD180642.fits'):
        os.remove('HD180642.fits')
    raise SystemExit

    #-- the remainder is unreachable legacy demo code for a PCA-based parameter
    #   estimate; it references names (master, keep, mc, ID, c0) that are not
    #   defined in this scope.
    #-- PCA analysis
    master['include'] = True
    exclude(master,names=['STROMGREN.HBN-HBW','USNOB1'],wrange=(1.5e4,np.inf))
    do_pca = False
    include_pca = master['include']

    if sum(keep)>2:
        do_pca = True
        logger.info("Start of PCA analysis to find fundamental parameters")
        colors,index = np.unique1d(master['photband'][include_pca],return_index=True)
        A,grid = fit.get_PCA_grid(colors,ebvrange=(0,0.5),res=10)
        P,T,(means,stds) = fit.get_PCA(A)
        calib = fit.calibrate_PCA(T,grid,function='linear')
        obsT,e_obsT = master['cmeas'][include_pca][index], master['e_cmeas'][include_pca][index]
        pars = fit.get_PCA_parameters(obsT,calib,P,means,stds,e_obsT=e_obsT,mc=mc)
        teff,logg,ebv = pars[0]
        logger.info("PCA result: teff=%.0f, logg=%.2f, E(B-V)=%.3f"%(teff,logg,ebv))

    #-- find angular diameter
    logger.info('Estimation of angular diameter')
    iflux = model.get_itable(teff=teff,logg=logg,ebv=ebv,photbands=master['photband'][include_grid])
    scale_pca = np.median(master['cmeas'][include_grid]/iflux)
    angdiam = 2*np.arctan(np.sqrt(scale_pca))/np.pi*180*3600*1000
    logger.info('Angular diameter = %.3f mas'%(angdiam))

    #-- get model
    wave_pca,flux_pca = model.get_table(teff=teff,logg=logg,ebv=ebv,law='fitzpatrick1999')
    wave_ur_pca,flux_ur_pca = model.get_table(teff=teff,logg=logg,ebv=0,law='fitzpatrick1999')

    logger.info('...brought to you in %.3fs'%(time.time()-c0))

    pl.figure()
    pl.title(ID)
    toplot = master[~master['color']]
    systems = np.array([system.split('.')[0] for system in toplot['photband']],str)
    set_systems = sorted(list(set(systems)))
    pl.gca().set_color_cycle([pl.cm.spectral(i) for i in np.linspace(0, 1.0, len(set_systems))])
    for system in set_systems:
        keep = systems==system
        pl.errorbar(master['cwave'][keep],master['cmeas'][keep],
                    yerr=master['e_cmeas'][keep],fmt='o',label=system)
    pl.gca().set_xscale('log',nonposx='clip')
    pl.gca().set_yscale('log',nonposy='clip')
    pl.gca().set_autoscale_on(False)
    pl.plot(wave_pca,flux_pca*scale_pca,'r-')
    pl.plot(wave_ur_pca,flux_ur_pca*scale_pca,'k-')
    pl.grid()
    pl.legend(loc='lower left',prop=dict(size='x-small'))
    pl.annotate('Teff=%d K\nlogg=%.2f cgs\nE(B-V)=%.3f mag\n'%(teff,logg,ebv)+r'$\theta$=%.3f mas'%(angdiam),(0.75,0.80),xycoords='axes fraction')
    pl.xlabel(r'wavelength [$\mu$m]')
    pl.ylabel(r'$F_\nu$ [Jy]')
    #### -- END SOME FIGURES -- ####


    if mc and do_pca:
        pl.figure()
        pl.subplot(131)
        pl.hexbin(pars[:,0],pars[:,1])
        pl.xlim(pars[:,0].max(),pars[:,0].min())
        pl.ylim(pars[:,1].max(),pars[:,1].min())
        pl.colorbar()
        pl.subplot(132)
        pl.hexbin(pars[:,0],pars[:,2])
        pl.xlim(pars[:,0].max(),pars[:,0].min())
        pl.ylim(pars[:,2].min(),pars[:,2].max())
        pl.colorbar()
        pl.subplot(133)
        pl.hexbin(pars[:,1],pars[:,2])
        pl.xlim(pars[:,1].max(),pars[:,1].min())
        pl.ylim(pars[:,2].min(),pars[:,2].max())
        pl.colorbar()
        pl.show()

################################
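The angular-diameter conversion used in the demo script above follows from the scale factor being the solid-angle ratio (R/d)^2: sqrt(scale) is the angular radius in radians, and the chain of factors converts radians to milliarcseconds. A self-contained sketch (the solar radius and parsec values here are illustrative round numbers):

```python
import math

def angular_diameter_mas(scale):
    """Convert the SED scale factor (R/d)^2 to an angular diameter in mas."""
    # sqrt(scale) = R/d is the angular radius; arctan keeps the conversion
    # exact outside the small-angle regime, then rad -> deg -> arcsec -> mas
    return 2 * math.atan(math.sqrt(scale)) / math.pi * 180 * 3600 * 1000

# e.g. a Sun-like star at 10 pc: R/d = Rsol / (10 pc), both in cm
scale = (6.957e10 / 3.086e19)**2
theta = angular_diameter_mas(scale)   # should come out just under 1 mas
```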