
Efficient symbolic memory modeling via z3's lambda-arrays #329

Open
LeventErkok opened this issue Apr 17, 2023 · 3 comments
Labels: performance (Issues related to runtime or simulation-time performance), symbolic-execution (Issues relating to macaw-symbolic and symbolic execution)

Comments

@LeventErkok

LeventErkok commented Apr 17, 2023

This is not an issue, but rather a comment on https://galois.com/blog/2023/03/making-a-scalable-smt-based-machine-code-memory-model/

(I'd have put a comment on the blog itself, but that doesn't seem possible; in any case, GitHub is probably better suited for this discussion.)

Cool blog! It reminded me about some of the more recent advances in z3 on how large symbolic memories can be modeled. As you mention, even though the memory is excessively large, any given program only accesses a small part of it, at least for most programs of interest. To address these concerns, z3 now allows functions (expressed as lambdas) to be used as arrays. This makes a lot of sense: A memory is nothing but a mapping from addresses to values, which is exactly what a function is. With some internal solver magic, a lot of the clunky SMT-array updates can be avoided.
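To illustrate the extension itself, here is a minimal hand-written SMT-Lib sketch (the identifier names here are made up) of a memory whose every cell initially holds its own address:

```smtlib
; A memory where each cell initially holds its own address,
; written as a lambda-array (a z3 extension to SMT-Lib):
(define-fun initMem () (Array Int Int)
  (lambda ((addr Int)) addr))

; Reading any address yields that address; asserting the negation
; at a sample address should come back unsat:
(assert (not (= (select initMem 42) 42)))
(check-sat)
```

No quantifiers or update chains are needed to pin down the entire (infinite) initial memory.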

I've also recently added support for this in SBV, with the function lambdaAsArray. The idea is that you create a lambda-term (which is a z3 extension, I don't think other solvers support it quite yet), encoding the memory as a function.

Here's an example I coded up that should illustrate the point:

{-# LANGUAGE NamedFieldPuns #-}

-- NamedFieldPuns is needed for the Pgm{len, pgm} pattern in 'load' below.
module Mem where

import Control.Monad (unless)

import Data.SBV
import Data.SBV.Control

-- Idealized memory model: infinitely addressable, storing arbitrary sized integers
type Address = Integer
type Addr    = SBV Address

-- Note that this memory is symbolically addressable, and it stores
-- symbolic integers. Historically, SArray takes base types as arguments, causing much confusion!
type Memory = SArray Address Integer

-- initial memory. Traditionally, it's all 0's; but for demo purposes of how we can
-- initialize it with arbitrary "functional" contents, we'll initialize each cell with
-- its own address
initMem :: Memory
initMem = lambdaAsArray id

-- Idealized program: It's just a memory, with some given length that we care about,
-- program is assumed to lie from 0 to len-1 in this memory
data Pgm = Pgm { len  :: Integer
               , pgm  :: Memory
               }

-- Load a "program" at given address
load :: Memory -> Pgm -> Addr -> Memory
load mem Pgm{len, pgm} startAddr = lambdaAsArray updated
  where updated idx = ite (idx .>= startAddr .&& idx .< startAddr + literal len)
                          (readArray pgm (idx - startAddr))  -- in the new program space
                          (readArray mem idx)                -- from the old memory

-- Example program; 20-words long. Say it maps every program byte to its address + 10
-- again constructed from a regular Haskell function
testPgm :: Pgm
testPgm = Pgm { len = 20
              , pgm = lambdaAsArray (+10)
              }

-- A simple theorem
example1 :: IO ThmResult
example1 = proveWith z3{verbose=True} $ readArray loaded 18 .== 16
  where -- start-up
        reset  = initMem

        -- load our program at address 12. So it occupies addresses 12..31,
        -- mapping each of those to 10..29, i.e., two below its address
        loaded = load reset testPgm 12

-- Query example
example2 :: IO ()
example2 = runSMTWith z3 $ do
    -- pick start-address >= 100, and load our program into it
    startAddr <- free "startAddr"
    constrain $ startAddr .>= 100

    let mem = load initMem testPgm startAddr

    -- This is a bit annoying; but since the above definition of mem
    -- is in a let, SBV has no way of communicating that to the solver
    -- before the query starts. So, add a dummy constraint on it.
    -- In a real problem, you'd have enough constraints around, so you
    -- wouldn't need this.
    constrain $ readArray mem 0 .== readArray mem 0

    query $ do
      cs <- checkSat

      case cs of
        Sat -> do
          start <- getValue startAddr
          io $ putStrLn $ "Got start: " ++ show start

          -- get the value stored at location startAddr + 5. We expect to get 15
          -- since it's in the program
          v1 <- getValue (readArray mem (startAddr + 5))
          io $ putStrLn $ "Expecting 15, got: " ++ show v1
          unless (v1 == 15) $ error "buggy!"

          -- get the value stored at location startAddr + 50. We expect to get
          -- startAddr + 50, since it's outside the loaded program (which
          -- occupies startAddr .. startAddr + 19)
          v2 <- getValue (readArray mem (startAddr + 50))
          io $ putStrLn $ "Expecting start + 50, got: " ++ show v2
          unless (v2 == start + 50) $ error "buggy!"

        _ -> error $ "Solver said: " ++ show cs

(Note that you need sbv 10.1, which was released on Hackage just a few days ago, to run this example.)

I'll let you look into the details, but the crucial point is that we get to initialize the memory not just to the typical "all 0s," but with a function that can store an arbitrary value at any address, possibly computed from that address. (See the definition of initMem.) If you were doing this with a regular array, you'd need either an infinite conjunction or a quantifier, neither of which plays all that well. But with a lambda-term, it behaves quite nicely. You can run example1 to see what a small footprint it has as it communicates with the solver; the trick is revealed in this SMT-Lib line:

(define-fun array_1 () (Array Int Int)
  (lambda ((l2_s0 Int)) l2_s0))

which shows the lambda extension allowed by z3.

In example2, we load a program into this memory and run some queries on it. Again, the footprint is quite small; you can see it by running in verbose mode (runSMTWith z3{verbose=True} ...).

Anyhow, I wanted to share this cool z3 feature with you. It'd be interesting to see if you can incorporate this idea into Macaw and see how it fares against your solution. Of course, when lambda terms are around, z3 no longer guarantees decidability: you can get unknown as an answer (or worse, loop forever). But I think if you use it in this restricted way, it should get you far. (z3 also allows lambdas with sequence-folding and the like, which can land you in genuinely undecidable territory.) I'd be interested in hearing what you find, if you play around with this.

Great to see Galois pushing the frontiers on this sort of analysis.

@RyanGlScott RyanGlScott added performance Issues related to runtime or simulation-time performance symbolic-execution Issues relating to macaw-symbolic and symbolic execution labels Apr 17, 2023
@RyanGlScott
Contributor

Thanks for the pointer! I was vaguely aware of lambda in the context of Yices, but I wasn't aware that Z3 supported it as well—neat!

I'm glad that we wrote up that post, as it has revealed that we are likely still leaving performance on the table. There are two other alternative memory models that we should try:

  1. Back global memory with a symbolic array defined by a lambda term, as you propose above.
  2. Don't back global memory with a symbolic array at all. Instead, create several SMT variables and define a mapping (on the Haskell side) from address values to SMT variables. Such a thing is possible with What4's arrayFromFn, for example. (Despite the name, arrayFromFn doesn't actually use an SMT array under the hood.)

Both of these options would be appealing because they could be eagerly constructed before the start of simulation, which would avoid the extra bookkeeping of the lazy macaw-symbolic memory model. We should actually benchmark these to see if they are indeed faster, of course.

@LeventErkok
Author

SBV used to have what I called "functional" arrays, which were simply (pseudocode):

type A a b = a -> b

read  f a      = f a
write f a v a' = ite (a' .== a) v (f a')

Note how this keeps everything on the Haskell side. But they had terrible performance compared to SMT-Lib arrays: once you did even a small number of writes, it built up a huge chain of ite calls, which caused a ton of split points. I tried to optimize it by keeping a cache on the Haskell side to speed up concrete-address reads and writes, but it got too complicated and wasn't worth it. I removed it all and now exclusively use SMT-Lib arrays, as they have the best performance, and lambdaAsArray gives the flexibility of using a function as-is.
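A concrete (non-symbolic) Haskell analogue of those functional arrays, just to illustrate the shape of the problem (the names here are made up for the sketch; in the symbolic version the if becomes an ite, so n writes build an n-deep conditional chain the solver must split on):

```haskell
-- Memory as a plain function: the concrete analogue of the old
-- "functional" arrays. (Illustrative sketch only; not SBV code.)
type FArray a b = a -> b

readF :: FArray a b -> a -> b
readF f a = f a

-- Each write wraps the previous memory in another conditional.
writeF :: Eq a => FArray a b -> a -> b -> FArray a b
writeF f a v = \a' -> if a' == a then v else f a'

main :: IO ()
main = do
  let mem0 = id :: FArray Int Int   -- cell i initially holds i
      mem1 = writeF mem0 5 100
      mem2 = writeF mem1 7 200
  print (readF mem2 5, readF mem2 7, readF mem2 9)  -- (100,200,9)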

I suspect What4's arrayFromFn might work in a similar way. I'd be surprised if it performed well, but maybe you have some other tricks to make it go fast.

@RyanGlScott
Contributor

This is useful data to have, thanks! Indeed, I suspect that arrayFromFn would likely not be as fast overall due to the sheer number of ites involved, but it's hard to say without actually trying it. (My intuition about SMT solver performance rarely turns out to be correct.)
