
Add carma to gp modeling [WIP] #767

Draft · wants to merge 7 commits into main
Conversation

dhuppenkothen (Member)
Draft pull request to add the CARMA kernel to the gpmodeling class.

Also introduces some minor fixes and additions:

  • the GP can now also take into account data uncertainties
  • added a posterior predictive modeling function
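As background on the first bullet, measurement uncertainties usually enter a GP likelihood by adding their variances to the covariance diagonal. A generic numpy sketch of this idea (illustrative only, not Stingray's actual implementation or API):

```python
import numpy as np

def gp_loglikelihood(K, y, yerr):
    """Gaussian log-likelihood of data y under kernel matrix K,
    folding per-point measurement errors into the covariance diagonal."""
    C = K + np.diag(yerr ** 2)            # C = K + diag(sigma_i^2)
    L = np.linalg.cholesky(C)             # stable solve via Cholesky
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    return -0.5 * (y @ alpha + logdet + len(y) * np.log(2.0 * np.pi))
```

With `yerr = 0` this reduces to the usual noiseless GP likelihood; larger error bars down-weight the corresponding points.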

@Gaurav17Joshi If you have a minute, I would love for you to take a look at this. The CARMA kernel takes an array of parameters by default, and I've struggled with making this work, so if you have suggestions for improvement, those would be very welcome!

@pep8speaks

pep8speaks commented Oct 5, 2023

Hello @dhuppenkothen! Thanks for updating this PR. We checked the lines you've touched for PEP 8 issues, and found:

Line 608:45: E203 whitespace before ':'
Line 611:63: E203 whitespace before ':'
Line 614:44: E203 whitespace before ':'
Line 617:62: E203 whitespace before ':'
Line 1079:12: E713 test for membership should be 'not in'
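For context on the last item, E713 prefers the `not in` operator over negating an `in` test; a minimal before/after sketch:

```python
items = [1, 2, 3]

# E713: test for membership should be 'not in'
flagged = not 2 in items      # works, but flagged by pycodestyle
preferred = 2 not in items    # idiomatic, equivalent form
```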

Comment last updated at 2023-10-05 07:47:54 UTC

@codecov

codecov bot commented Oct 5, 2023

Codecov Report

Merging #767 (b193796) into main (34e6a94) will decrease coverage by 7.98%.
The diff coverage is 4.10%.

@@            Coverage Diff             @@
##             main     #767      +/-   ##
==========================================
- Coverage   97.29%   89.32%   -7.98%     
==========================================
  Files          43       43              
  Lines        7918     8045     +127     
==========================================
- Hits         7704     7186     -518     
- Misses        214      859     +645     
Files Coverage Δ
stingray/modeling/gpmodeling.py 12.37% <4.10%> (-83.42%) ⬇️

... and 13 files with indirect coverage changes




- def get_kernel(kernel_type, kernel_params):
+ def get_kernel(kernel_type, kernel_params, p=1, q=0):
Contributor

What range of values of p and q will be used by the user?

Member Author

p and q are user-defined; p will probably be somewhere in the range ~1-10, and similarly q in the range ~0-10, though larger values are theoretically possible (but would be very hard to model). An important constraint is that p >= q.
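That constraint could be enforced up front; a minimal sketch of such a check (a hypothetical helper, not part of the PR):

```python
def validate_carma_order(p, q):
    """Check the CARMA(p, q) order constraint described above:
    p >= 1, q >= 0, and p >= q."""
    if not (isinstance(p, int) and isinstance(q, int)):
        raise TypeError("p and q must be integers")
    if p < 1 or q < 0:
        raise ValueError("need p >= 1 and q >= 0")
    if q > p:
        raise ValueError(f"CARMA order requires p >= q, got p={p}, q={q}")
    return p, q
```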

parameter = yield prior_dict[i]
for param in params_list:
    if param == "log_alpha":
        for j in range(p):
Contributor

Is log_alpha here a single parameter, or is it a list or array of parameters? If so, what is its size?

Member Author

log_alpha is an array of size p, and log_beta is an array of size q.
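For instance (hypothetical values), for a CARMA(p=3, q=1) model:

```python
import numpy as np

p, q = 3, 1
log_alpha = np.log([1.0, 0.5, 0.2])  # autoregressive parameters, size p
log_beta = np.log([0.3])             # moving-average parameters, size q
assert log_alpha.shape == (p,) and log_beta.shape == (q,)
```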

Contributor

This method seems fine, but some suggestions:

  1. Here all the log_alpha's will have the same prior distribution. Is this intended?
  2. Instead of checking for param == "log_alpha" at each instance, we may want to just check whether the parameter the user gives is an array, and handle that case separately. For example, the dictionary we usually pass to the get_prior function looks like
prior_dict = {
   "log_alpha": tfpd.Uniform(low = jnp.log(1e-1), high = jnp.log(2e+2)),
   "t0": tfpd.Uniform(low = 0.0 - 0.1, high = 1 + 0.1),
   }

If we want log_alpha to actually be an array parameter of size p, we can change the dictionary to

prior_dict = {
   "log_alpha": [tfpd.Uniform(low = jnp.log(1e-1), high = jnp.log(2e+2)),p],
   "t0": tfpd.Uniform(low = 0.0 - 0.1, high = 1 + 0.1),
   }

and the corresponding modifications to the get_prior function's prior_model could be:

def prior_model():
   prior_list = []
   for i in params_list:
      # New code added for array parameters
      if isinstance(prior_dict[i], list):
         for j in range(prior_dict[i][1]):
            if isinstance(prior_dict[i][0], tfpd.Distribution):
               parameter = yield Prior(prior_dict[i][0], name=i + str(j))
            elif isinstance(prior_dict[i][0], Prior):
               parameter = yield prior_dict[i][0]
            else:
               raise ValueError("Prior should be a tfpd distribution or a jaxns prior.")
            # append inside the loop so every array element is collected
            prior_list.append(parameter)
      # Old code
      elif isinstance(prior_dict[i], tfpd.Distribution):
         parameter = yield Prior(prior_dict[i], name=i)
         prior_list.append(parameter)
      elif isinstance(prior_dict[i], Prior):
         parameter = yield prior_dict[i]
         prior_list.append(parameter)
      else:
         raise ValueError("Prior should be a tfpd distribution or a jaxns prior.")
   return tuple(prior_list)
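To illustrate the convention above without pulling in jaxns or tfp: an entry tagged as `[distribution, size]` expands into `size` named scalar parameters, while plain entries stay as-is. A plain-Python sketch of just the name expansion:

```python
def expand_param_names(prior_dict):
    """Expand array-tagged entries ([dist, size]) into indexed names,
    mirroring the name = param + str(j) convention in the snippet above."""
    names = []
    for name, spec in prior_dict.items():
        if isinstance(spec, list):          # [distribution, size]
            _, size = spec
            names.extend(name + str(j) for j in range(size))
        else:
            names.append(name)
    return names

expand_param_names({"log_alpha": ["dist", 3], "t0": "dist"})
# -> ['log_alpha0', 'log_alpha1', 'log_alpha2', 't0']
```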

3 participants