https://sarabander.github.io/sicp/html/1_002e3.xhtml


Section 1.3

The following are equivalent:

(define (plus4 x) (+ x 4))
(define plus4 (lambda (x) (+ x 4)))

I find the example extremely instructive:

$$f(x,y)=x(1+xy)^2+y(1-y)+(1+xy)(1-y)$$

We probably want to calculate $1+xy$ and $1-y$ only once. So,

(define (f x y)
  (define (f-helper a b)
    (+ (* x (square a))
       (* y b)
       (* a b)))
  (f-helper (+ 1 (* x y)) 
            (- 1 y)))
;; =>
(define (f x y)
  ((lambda (a b)
     (+ (* x (square a)) 
        (* y b) 
        (* a b)))
   (+ 1 (* x y))
   (- 1 y)))
;; =>
(define (f x y)
  (let ((a (+ 1 (* x y)))
        (b (- 1 y)))
    (+ (* x (square a))
       (* y b)
       (* a b))))

In short,

(let ((⟨var₁⟩ ⟨exp₁⟩)
      (⟨var₂⟩ ⟨exp₂⟩)
      …
      (⟨varₙ⟩ ⟨expₙ⟩))
  ⟨body⟩)
;; is alternative syntax for
((lambda (⟨var₁⟩ … ⟨varₙ⟩)
   ⟨body⟩)
 ⟨exp₁⟩
 …
 ⟨expₙ⟩)

... 1.3.3 just talks about boring stuff: the half-interval method, the fixed-point procedure, and fixed points with average damping.

... 1.3.4 just defines numerical stuff: average-damp and newtons-method (using numerical differentiation for deriv).
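
For later reference, here is a minimal sketch of the machinery from those two sections, roughly as the book gives it (fixed-point from 1.3.3; average-damp and the damped square root from 1.3.4):

(define tolerance 0.00001)
(define (fixed-point f first-guess)
  (define (close-enough? v1 v2)
    (< (abs (- v1 v2)) tolerance))
  (define (try guess)
    (let ((next (f guess)))
      (if (close-enough? guess next)
          next
          (try next))))
  (try first-guess))

(define (average a b) (/ (+ a b) 2))
(define (average-damp f)
  (lambda (x) (average x (f x))))

;; sqrt as a fixed point of the average-damped y ↦ x/y
(define (sqrt x)
  (fixed-point (average-damp (lambda (y) (/ x y)))
               1.0))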

Meeting 03-23-2025

Exercises

Exercise 1.29

Simpson's Rule is a more accurate method of numerical integration than the method illustrated above. Using Simpson's Rule, the integral of a function $f$ between $a$ and $b$ is approximated as

$$\frac{h}{3} \left(y_0 + 4y_1 + 2y_2 + 4y_3 + 2y_4 + \ldots + 2y_{n-2} + 4y_{n-1} + y_n\right)$$ where $h = (b - a)/n$, for some even integer $n$, and $y_k = f(a + kh)$. (Increasing $n$ increases the accuracy of the approximation.) Define a procedure that takes as arguments $f,$ $a,$ $b,$ and $n$ and returns the value of the integral, computed using Simpson's Rule. Use your procedure to integrate cube between 0 and 1 (with $n = 100$ and $n = 1000$), and compare the results to those of the integral procedure shown above.

Solution
#lang sicp

(define (sum term a next b)
  (if (> a b)
      0
      (+ (term a)
         (sum term (next a) next b))))

(define (integral f a b dx)
  (define (add-dx x) (+ x dx))
  (* (sum f (+ a (/ dx 2.0)) add-dx b) 
     dx))



;; Composite Simpson's rule with npts (even) subintervals: dx here is 2h,
;; where h = (b - a)/npts, and each term groups a pair of subintervals as
;; f(x - h) + 4 f(x) + f(x + h), evaluated at the midpoints x = a + h, a + 3h, ...
(define (simp-integral f a b npts)
  (define dx (* (- b a) (/ 2 npts)))
  (define (term-func x)
    (+ (f (- x (/ dx 2.0))) (* 4.0 (f x)) (f (+ x (/ dx 2.0)))))
  (define (add-dx x) (+ x dx))
  (* (sum term-func (+ a (/ dx 2.0)) add-dx b)
     (/ dx 6.0)))

(define (cube x) (* x x x))

(integral cube 0 1 0.01)
(integral cube 0 1 0.001)

(simp-integral cube 0 1 2)
(simp-integral cube 0 1 6)
(simp-integral cube 0 1 10)
(simp-integral cube 0 1 100)
Output:
0.24998750000000042
0.249999875000001
0.25
0.2499999999999999
0.24999999999999994
0.25000000000000044

A quick Google search shows that Simpson's rule is an exact quadrature for polynomials of degree 3 or less, so recovering 1/4 (up to floating-point roundoff) even with n = 2 is expected.
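
As a sanity check, take the smallest case $n = 2$ (so $h = 1/2$) on $f(x) = x^3$ over $[0, 1]$:

$$\frac{h}{3}\left(y_0 + 4y_1 + y_2\right) = \frac{1}{6}\left(0^3 + 4\cdot\left(\tfrac{1}{2}\right)^3 + 1^3\right) = \frac{1}{6}\cdot\frac{3}{2} = \frac{1}{4},$$

which is exactly $\int_0^1 x^3\,dx$ and matches the (simp-integral cube 0 1 2) result above.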

NB: the generalization to quadrature rules that give exact results for all polynomials of degree $2n-1$ using only $n$ evaluation points is Gaussian quadrature.
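
For example, the two-point Gauss-Legendre rule

$$\int_{-1}^{1} f(x)\,dx \;\approx\; f\!\left(-\tfrac{1}{\sqrt{3}}\right) + f\!\left(\tfrac{1}{\sqrt{3}}\right)$$

is exact through degree $3 = 2\cdot 2 - 1$: for $f(x) = x^2$ both sides equal $2/3$, and for odd powers both sides vanish.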

Exercise 1.30

The sum procedure above generates a linear recursion. The procedure can be rewritten so that the sum is performed iteratively. Show how to do this by filling in the missing expressions in the following definition:

(define (sum term a next b)
  (define (iter a result)
    (if ⟨??⟩
        ⟨??⟩
        (iter ⟨??⟩ ⟨??⟩)))
  (iter ⟨??⟩ ⟨??⟩))
Solution
#lang sicp

; First, repeat the algorithm in the book
(define (sum term a next b)
  (if (> a b)
      0
      (+ (term a)
         (sum term (next a) next b))))

(define (pi-sum a b)
  (define (pi-term x)
    (/ 1.0 (* x (+ x 2))))
  (define (pi-next x)
    (+ x 4))
  (sum pi-term a pi-next b))

(* 8 (pi-sum 1 1000))

; reimplementing it iteratively
(define (sum-iter term a next b)
  (define (iter a result)
    (if (> a b)
        result
        (iter (next a) (+ result (term a)))))
  (iter a 0))

(define (pi-sum-iter a b)
  (define (pi-term x)
    (/ 1.0 (* x (+ x 2))))
  (define (pi-next x)
    (+ x 4))
  (sum-iter pi-term a pi-next b))

(* 8 (pi-sum-iter 1 1000))
Output:
3.139592655589783
3.139592655589782

important footnote:

The intent of Exercise 1.31 through Exercise 1.33 is to demonstrate the expressive power that is attained by using an appropriate abstraction to consolidate many seemingly disparate operations. However, though accumulation and filtering are elegant ideas, our hands are somewhat tied in using them at this point since we do not yet have data structures to provide suitable means of combination for these abstractions. We will return to these ideas in 2.2.3 when we show how to use sequences as interfaces for combining filters and accumulators to build even more powerful abstractions. We will see there how these methods really come into their own as a powerful and elegant approach to designing programs.

Exercise 1.31

  1. The sum procedure is only the simplest of a vast number of similar abstractions that can be captured as higher-order procedures. Write an analogous procedure called product that returns the product of the values of a function at points over a given range. Show how to define factorial in terms of product. Also use product to compute approximations to $\pi$ using the formula $$\frac\pi4 \,=\, \frac{2\cdot 4\cdot 4\cdot 6\cdot 6\cdot 8\cdot\cdots}{3\cdot 3\cdot 5\cdot 5\cdot 7\cdot 7\cdot\cdots}.$$

  2. If your product procedure generates a recursive process, write one that generates an iterative process. If it generates an iterative process, write one that generates a recursive process.

Solution
#lang sicp

; Helper functions
(define (id x) x)
(define (next x) (+ x 1))
(define (square x) (* x x))
(define (next2 x) (+ x 2))

; product defns
(define (product term a next b)
  (if (> a b)
      1
      (* (term a)
         (product term (next a) next b))))
(define (product-iter term a next b)
  (define (iter a result)
    (if (> a b)
        result
        (iter (next a) (* result (term a)))))
  (iter a 1))

; Implement factorial and wallis product.
(define (fact a)
  (product id 1 next a))
(define (fact-iter a)
  (product-iter id 1 next a))
(define (wallis-term x) (/ (* x (+ x 2)) (square (+ x 1))))
(define (wallis-pi nterms)
  (* 4 (product wallis-term 2.0 next2 nterms)))
(define (wallis-pi-iter nterms)
  (* 4 (product-iter wallis-term 2.0 next2 nterms)))

; Pretty print
(display "Factorial (linear recursive, then iterative)") 
(newline)
(fact 5)
(fact-iter 5)
(newline)
(newline)
(display "Wallis pi")
(newline)
(wallis-pi 100)
(wallis-pi-iter 100)
Output:
Factorial (linear recursive, then iterative)
120
120


Wallis pi
3.157030176455167
3.1570301764551667

Exercise 1.32

1. Show that sum and product (Exercise 1.31) are both special cases of a still more general notion called accumulate that combines a collection of terms, using some general accumulation function:

(accumulate 
 combiner null-value term a next b)

Accumulate takes as arguments the same term and range specifications as sum and product, together with a combiner procedure (of two arguments) that specifies how the current term is to be combined with the accumulation of the preceding terms and a null-value that specifies what base value to use when the terms run out. Write accumulate and show how sum and product can both be defined as simple calls to accumulate.

2. If your accumulate procedure generates a recursive process, write one that generates an iterative process. If it generates an iterative process, write one that generates a recursive process.

Solution
#lang sicp


;(accumulate
; combiner null-value term a next b)
;; It helps to write out sum and product in terms of accumulate first.

(define (product term a next b)
  (accumulate * 1 term a next b))
(define (sum term a next b)
  (accumulate + 0 term a next b))
(define (accumulate combiner null-value term a next b)
  (if (> a b)
    null-value
    (combiner (term a)
      (accumulate combiner null-value term (next a) next b))))

;; Evidently there was no reason for me to define a scoped "iter"
;;    function earlier.
;; (define (accumulate-iter combiner null-value term a next b)
;;   (define (iter a result)
;;     (if (> a b)
;;          result
;;          (iter (next a) (combiner result (term a)))))
;;   (iter a null-value))
(define (accumulate-iter combiner null-value term a next b)
  (if (> a b)
      null-value
      (accumulate-iter combiner
                       (combiner null-value (term a))
                       term
                       (next a)
                       next
                       b)))
(define (sum-iter term a next b)
  (accumulate-iter + 0 term a next b))

(define (add-one n)
  (+ n 1))
(define (my-term n)
  (/ 6.0 (* n n)))
; Should be pi
(sqrt (sum my-term 1 add-one 10000))
(sqrt (sum-iter my-term 1 add-one 10000))
Output:
3.1414971639472093
3.1414971639472147

Exercise 1.33

You can obtain an even more general version of accumulate (Exercise 1.32) by introducing the notion of a filter on the terms to be combined. That is, combine only those terms derived from values in the range that satisfy a specified condition. The resulting filtered-accumulate abstraction takes the same arguments as accumulate, together with an additional predicate of one argument that specifies the filter. Write filtered-accumulate as a procedure. Show how to express the following using filtered-accumulate:

  1. the sum of the squares of the prime numbers in the interval $a$ to $b$ (assuming that you have a prime? predicate already written)

  2. the product of all the positive integers less than $n$ that are relatively prime to $n$ (i.e., all positive integers $i \lt n$ such that $\textrm{GCD}(i, n) = 1$).

Solution
#lang sicp

(define (filtered-accumulate 
           combiner filter? null-value term a next b)
  (if (> a b)
       null-value
       (filtered-accumulate combiner 
                            filter?
                            (if (filter? a) 
                              (combiner null-value (term a))
                              null-value)
                            term
                            (next a)
                            next
                            b)))

(define (smallest-divisor n)
  (find-divisor n 2))
(define (square n) (* n n))
(define (find-divisor n test-divisor)
  (cond ((> (square test-divisor) n) 
         n)
        ((divides? test-divisor n) 
         test-divisor)
        (else (find-divisor 
               n 
               (+ test-divisor 1)))))
(define (divides? a b)
  (= (remainder b a) 0))
(define (prime? n)
  (= n (smallest-divisor n)))
(define (ssquare-prime n)
  (filtered-accumulate + prime? 0 square 2 add-one n))

(define (gcd a b)
  (if (= b 0)
      a
      (gcd b (remainder a b))))
(define (id n) n)
(define (add-one n) (+ n 1))
(define (coprime-prod n)
  (define (coprime? a)
    (= 1 (gcd a n)))
  (filtered-accumulate * coprime? 1 id 1 add-one n))

(define (print-seq func a b)
  (cond ((> a b) (display ""))
        ((= a b) (display (func a)))
        (else (display (func a))
              (display ", ")
              (print-seq func (+ a 1) b))))

(print-seq ssquare-prime 1 10)
(newline)
(print-seq coprime-prod 1 10)
Output:
0, 4, 13, 13, 38, 38, 87, 87, 87, 87
1, 1, 2, 3, 24, 5, 720, 105, 2240, 189
  1. https://oeis.org/A081738
  2. https://oeis.org/A001783

Exercise 1.34

Suppose we define the procedure

(define (f g) (g 2))

Then we have

(f square)
4
(f (lambda (z) (* z (+ z 1))))
6

What happens if we (perversely) ask the interpreter to evaluate the combination (f f)? Explain.

Solution

We'll end up evaluating (f 2), which in turn evaluates (2 2). Since 2 is not a procedure, this is a runtime "application: not a procedure" error (not a syntax error), as the output below shows.

#lang sicp


(define (f g) (g 2))
(f f)
Output:
Error (exit code 1):
application: not a procedure;
 expected a procedure that can be applied to arguments
  given: 2
  context...:
   body of "/home/physbuzz/sicp/src/ch1/ch1-code/ex1-34.rkt"

Exercise 1.35

Show that the golden ratio $\varphi$ (1.2.2) is a fixed point of the transformation $x \mapsto 1 + 1 / x,$ and use this fact to compute $\varphi$ by means of the fixed-point procedure.

Solution

From the defining property $\varphi^2 = \varphi + 1$ (see 1.2.2), dividing through by $\varphi$ gives $\varphi = 1 + 1/\varphi$, so $\varphi$ is a fixed point of $f(x) = 1 + 1/x$. What matters more for the computation is whether it is an attractive fixed point, which happens whenever $|f'(x)|\lt 1$ at the fixed point. Since $f'(x)=-1/x^2$ and $\varphi\gt 1,$ $\varphi$ is indeed attractive.
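
Concretely, the contraction factor at the fixed point is

$$|f'(\varphi)| \,=\, \frac{1}{\varphi^2} \,=\, 2 - \varphi \,\approx\, 0.382,$$

so each iteration shrinks the error by roughly that factor (linear convergence).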

#lang sicp
(define tolerance 0.00001)
(define (fixed-point f first-guess)
  (define (close-enough? v1 v2)
    (< (abs (- v1 v2)) 
       tolerance))
  (define (try guess)
    (let ((next (f guess)))
      (if (close-enough? guess next)
          next
          (try next))))
  (try first-guess))

;; Fixed point calculation
(fixed-point (lambda (x) (+ 1 (/ 1.0 x))) 1.5)
;; Exact value
(/ (+ 1.0 (sqrt 5.0)) 2.0)
Output:
1.6180327868852458
1.618033988749895

Exercise 1.36

Modify fixed-point so that it prints the sequence of approximations it generates, using the newline and display primitives shown in Exercise 1.22. Then find a solution to $x^x = 1000$ by finding a fixed point of $x \mapsto {\log(1000) / \log(x)}$. (Use Scheme's primitive log procedure, which computes natural logarithms.) Compare the number of steps this takes with and without average damping. (Note that you cannot start fixed-point with a guess of 1, as this would cause division by $\log(1) = 0$.)

Solution

Without averaging, getting 1 part in $10^4$ accuracy takes 30 function evaluations (starting at 1.5). With averaging, 9 steps.

#lang sicp
(define tolerance 0.0001)
(define (fixed-point f first-guess)
  (define (close-enough? v1 v2)
    (< (abs (- v1 v2)) 
       tolerance))
  (define (disp guess n)
    (display "Step ")
    (display n)
    (display " = ")
    (display guess)
    (newline))
  (define (try guess n)
    (let ((next (f guess)))
      (if (close-enough? guess next)
          (disp next n)
          (try next (+ n 1)))))
  (try first-guess 1))
(define (average a b) (/ (+ a b) 2))
(define (my-f x) (/ (log 1000) (log x)))

(fixed-point my-f 1.5)
(fixed-point (lambda (x) (average x (my-f x))) 1.5)
Output:
Step 30 = 4.555506470667713
Step 9 = 4.555548906156018

I was confused about the let syntax; inside (define (try ...) ...) these two are equivalent:

 (define (try guess)
   (let ((next (f guess)))
     (if (close-enough? guess next)
         next
         (try next))))
 (define (try guess)
   ((lambda (next)
      (if (close-enough? guess next)
          next
          (try next)))
    (f guess)))

Exercise 1.37

1. An infinite continued fraction is an expression of the form $$f \,=\, {\frac{N_1}{D_1 + \frac{N_2}{D_2 + \frac{N_3}{D_3 + \dots}}}.}$$ As an example, one can show that the infinite continued fraction expansion with the $N_i$ and the $D_i$ all equal to 1 produces $1 / \varphi$, where $\varphi$ is the golden ratio (described in 1.2.2). One way to approximate an infinite continued fraction is to truncate the expansion after a given number of terms. Such a truncation--a so-called k-term finite continued fraction--has the form $${\frac{N_1}{D_1 + \frac{N_2}{\ddots + \frac{N_k}{D_k}}}.}$$ Suppose that n and d are procedures of one argument (the term index $i$) that return the $N_i$ and $D_i$ of the terms of the continued fraction. Define a procedure cont-frac such that evaluating (cont-frac n d k) computes the value of the $k$-term finite continued fraction. Check your procedure by approximating $1 / \varphi$ using

(cont-frac (lambda (i) 1.0)
           (lambda (i) 1.0)
           k)

for successive values of k. How large must you make k in order to get an approximation that is accurate to 4 decimal places?

2. If your cont-frac procedure generates a recursive process, write one that generates an iterative process. If it generates an iterative process, write one that generates a recursive process.

Solution

I find that building the continued fraction from the deepest nesting outwards leads naturally to an iterative algorithm, while going from the top level down to the deepest nesting leads naturally to a linear recursive algorithm.

I get that we need 11 levels (so that the deepest calls are to (d 11) and (n 11)) in order to get four digits of accuracy. Solving SICP reports needing 14 layers; they are probably just interpreting the accuracy requirement differently.

#lang sicp

; Iterative loop for calculating continued fractions: start at the bottom
; value1 = (/ ( n k) (d k))
; value2 = (/ ( n (- k 1)) (+ (d (- k 1)) value1))
; value3 = (/ ( n (- k 2)) (+ (d (- k 2)) value2))
(define (cont-frac n d k)
  (define (bottom-up kprime val)
    (if (= kprime 0)
        val
        (bottom-up (- kprime 1)
                   (/ (n kprime)
                      (+ (d kprime) val)))))
  (bottom-up k 0))

; (/ (n 1) (+ (d 1) (terms 2 k)))
(define (cont-frac-recursive n d k)
  (define (top-down kprime)
    (if (= kprime k)
      (/ (n kprime) (d kprime))
      (/ (n kprime) (+ (d kprime) (top-down (+ kprime 1))))))
  (top-down 1))


(cont-frac (lambda (i) 1) (lambda (i) 1) 6)
(cont-frac-recursive (lambda (i) 1) (lambda (i) 1) 6)

(define tolerance 0.00005)
(define (n-accuracy f exact n)
  (if (< (abs (- (f n) exact)) tolerance)
    n
    (n-accuracy f exact (+ n 1))))

(let ((n (n-accuracy 
    (lambda (k) (cont-frac (lambda (i) 1) (lambda (i) 1) k))
    (/ (- (sqrt 5.0) 1) 2)
    1)))
  (display "n required for an accuracy of ")
  (display tolerance) 
  (display " is ")
  (display n)
  (newline)
  (display 0.6180339887498)
  (newline)
  (cont-frac (lambda (i) 1.0) (lambda (i) 1.0) n))
Output:
8/13
8/13
n required for an accuracy of 5e-5 is 11
0.6180339887498
0.6180555555555556

Exercise 1.38

In 1737, the Swiss mathematician Leonhard Euler published a memoir De Fractionibus Continuis, which included a continued fraction expansion for $e - 2$, where $e$ is the base of the natural logarithms. In this fraction, the $N_i$ are all 1, and the $D_i$ are successively:

1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, ...

Write a program that uses your cont-frac procedure from Exercise 1.37 to approximate $e$, based on Euler's expansion.

Solution
#lang sicp

(define (cont-frac n d k)
  (define (bottom-up kprime val)
    (if (= kprime 0)
        val
        (bottom-up (- kprime 1)
                   (/ (n kprime)
                      (+ (d kprime) val)))))
  (bottom-up k 0))

; `1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, ...`
(let ((n (lambda (k) 1.0))
      (d (lambda (k)
           (let ((m (remainder k 3)))
             (cond ((= m 1) 1)
                   ((= m 2) (* 2 (/ (+ k 1) 3)))
                   (else 1))))))
  (display "11 terms: ")
  (display (cont-frac n d 11))
  (newline)
  (display "Exact   : 0.71828182845904"))
Output:
11 terms: 0.7182818352059925
Exact   : 0.71828182845904
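
Since the continued fraction itself approximates $e - 2$, the corresponding estimate of $e$ is just

$$e \,\approx\, 2 + 0.7182818352\ldots \,=\, 2.7182818352\ldots$$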

Exercise 1.39

A continued fraction representation of the tangent function was published in 1770 by the German mathematician J.H. Lambert: $${\tan x} \,=\, {\frac{x}{1 - \frac{x^2}{3 - \frac{x^2}{5 - \dots}}}\,,}$$ where $x$ is in radians. Define a procedure (tan-cf x k) that computes an approximation to the tangent function based on Lambert's formula. k specifies the number of terms to compute, as in Exercise 1.37.

Solution

6 terms gives us a very accurate result for a range of angle arguments:

#lang sicp


;; (define (cont-frac-recursive n d k)
;;   (define (top-down kprime)
;;     (if (= kprime k)
;;       (/ (n kprime) (d kprime))
;;       (/ (n kprime) (+ (d kprime) (top-down (+ kprime 1))))))
;;   (top-down 1))
;; (define (tan x k)
;;   (define (top-down kprime xpow)
;;     (if (= kprime k)
;;       (/ xpow (- (* kprime 2) 1))
;;       (/ xpow (- (- (* kprime 2) 1) (top-down (+ kprime 1) (* xpow x))))))
;;   (top-down 1 x))

(define (tan x k)
  (let ((x2 (* x x)))
    (define (top-down kprime)
      (if (= kprime k)
        (/ x2 (- (* kprime 2) 1))
        (/ x2 (- (- (* kprime 2) 1) (top-down (+ kprime 1))))))
    (/ x (- 1 (top-down 2)))))

(display "tan(0.1) = 0.10033467208545054... (exact)")
(newline)
(display "tan(0.1) = ")
(tan 0.1 6)
(newline)
(display "tan(1.0) = 1.557407724654902... (exact)")
(newline)
(display "tan(1.0) = ")
(tan 1.0 6)
(newline)
(display "tan(1.55) = 48.07848247921896... (exact)")
(newline)
(display "tan(1.55) = ")
(tan 1.55 6)
(newline)
Output:
tan(0.1) = 0.10033467208545054... (exact)
tan(0.1) = 0.10033467208545055

tan(1.0) = 1.557407724654902... (exact)
tan(1.0) = 1.557407722401769

tan(1.55) = 48.07848247921896... (exact)
tan(1.55) = 48.07807712807702

Exercise 1.40

Define a procedure cubic that can be used together with the newtons-method procedure in expressions of the form

(newtons-method (cubic a b c) 1)

to approximate zeros of the cubic $x^3 + ax^2 + bx + c$.

Solution
#lang sicp

(define tolerance 0.0001)
(define (fixed-point f first-guess)
  (define (close-enough? v1 v2)
    (< (abs (- v1 v2)) 
       tolerance))
  (define (try guess)
    (let ((next (f guess)))
      (if (close-enough? guess next)
          next
          (try next))))
  (try first-guess))
(define dx 0.00001)
(define (deriv g)
  (lambda (x)
    (/ (- (g (+ x dx)) (g x))
       dx)))
(define (newton-transform g)
  (lambda (x)
    (- x (/ (g x)
            ((deriv g) x)))))
(define (newtons-method g guess)
  (fixed-point (newton-transform g)
               guess))
(define (cubic a b c)
  (lambda (x) (+ (* x x x) (* a x x) (* b x) c)))

(newtons-method (cubic -6 3 5) 5.0)
(display "5.2465513645585649... (exact)")
Output:
5.246551364558889
5.2465513645585649... (exact)

Exercise 1.41

Define a procedure double that takes a procedure of one argument as argument and returns a procedure that applies the original procedure twice. For example, if inc is a procedure that adds 1 to its argument, then (double inc) should be a procedure that adds 2. What value is returned by

(((double (double double)) inc) 5)
Solution
#lang sicp

(define (inc x) (+ x 1))
(define (double f) (lambda (x) (f (f x))))

((double inc) 1)

(((double (double double)) inc) 5)
Output:
3
21

We see that the expression in question causes sixteen applications of inc, so it prints out 21 = 5 + 16. Evaluating step by step:

(double double) is (lambda (x) (double (double x)))
(double (lambda (x) (double (double x))))
(lambda (y) ((lambda (x) (double (double x))) ((lambda (x) (double (double x))) y))) which is
(lambda (y) ((lambda (x) (double (double x))) (double (double y)))) which is
(lambda (y) (double (double (double (double y)))))

This amounts to $2^4 = 16$ applications of inc. Easy.
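
As a quick check of that last line, reusing double and inc from the solution above, nesting double four times around inc should add $2^4$ to its argument:

((double (double (double (double inc)))) 5)
;; => 21, the same as ((double (double double)) inc) applied to 5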

Exercise 1.42

Let $f$ and $g$ be two one-argument functions. The composition $f$ after $g$ is defined to be the function $x \mapsto f(g(x))$. Define a procedure compose that implements composition. For example, if inc is a procedure that adds 1 to its argument,

((compose square inc) 6)
49
Solution
#lang sicp
(define (square x) (* x x))
(define (inc x) (+ x 1))
(define (compose f g) (lambda (x) (f (g x))))
((compose square inc) 6)
Output:
49

Exercise 1.43

If $f$ is a numerical function and $n$ is a positive integer, then we can form the $n^{\text{th}}$ repeated application of $f$, which is defined to be the function whose value at $x$ is $f(f(\dots (f(x))\dots ))$. For example, if $f$ is the function $x \mapsto x + 1$, then the $n^{\text{th}}$ repeated application of $f$ is the function $x \mapsto x + n$. If $f$ is the operation of squaring a number, then the $n^{\text{th}}$ repeated application of $f$ is the function that raises its argument to the $2^n\text{-th}$ power. Write a procedure that takes as inputs a procedure that computes $f$ and a positive integer $n$ and returns the procedure that computes the $n^{\text{th}}$ repeated application of $f$. Your procedure should be able to be used as follows:

((repeated square 2) 5)
625

Hint: You may find it convenient to use compose from Exercise 1.42.

Solution
#lang sicp
(define (square x) (* x x))
(define (inc x) (+ x 1))
(define (compose f g) (lambda (x) (f (g x))))
(define (repeated f n) 
  (if (= n 1)
    f
   (compose f (repeated f (- n 1)))))
((repeated square 2) 5)
((repeated inc 50) 1)
Output:
625
51

Exercise 1.44

The idea of smoothing a function is an important concept in signal processing. If $f$ is a function and $dx$ is some small number, then the smoothed version of $f$ is the function whose value at a point $x$ is the average of $f(x - dx)$, $f(x)$, and $f(x + dx)$. Write a procedure smooth that takes as input a procedure that computes $f$ and returns a procedure that computes the smoothed $f$. It is sometimes valuable to repeatedly smooth a function (that is, smooth the smoothed function, and so on) to obtain the n-fold smoothed function. Show how to generate the n-fold smoothed function of any given function using smooth and repeated from Exercise 1.43.

Solution
#lang sicp
(define (square x) (* x x))
(define (inc x) (+ x 1))
(define (compose f g) (lambda (x) (f (g x))))
(define (repeated f n) 
  (if (= n 1)
    f
   (compose f (repeated f (- n 1)))))
(define dx 0.05)
(define (smooth f) (lambda (x) (/ (+ (f (+ x dx)) (f x) (f (- x dx))) 3)))

((smooth abs) 0)
(((repeated smooth 4) abs) 0)
(((repeated smooth 4) abs) 1.0)
Output:
0.03333333333333333
0.06419753086419754
0.9999999999999999

As an example, I smooth the $|x|$ function. As the smoothing rounds off the jagged point at $x=0$, the value of the smoothed abs at 0 rises. Evaluated far away from zero, the smoothed abs is essentially unchanged.
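
The first printed value is easy to check by hand:

$$\frac{|{-dx}| + |0| + |dx|}{3} \,=\, \frac{0.05 + 0 + 0.05}{3} \,=\, \frac{0.1}{3} \,\approx\, 0.0333,$$

matching ((smooth abs) 0) above.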

Exercise 1.45

We saw in 1.3.3 that attempting to compute square roots by naively finding a fixed point of $y \mapsto x / y$ does not converge, and that this can be fixed by average damping. The same method works for finding cube roots as fixed points of the average-damped $y \mapsto x / y^2$. Unfortunately, the process does not work for fourth roots---a single average damp is not enough to make a fixed-point search for $y \mapsto x / y^3$ converge. On the other hand, if we average damp twice (i.e., use the average damp of the average damp of $y \mapsto x / y^3$) the fixed-point search does converge. Do some experiments to determine how many average damps are required to compute $n^{\text{th}}$ roots as a fixed-point search based upon repeated average damping of $y \mapsto x / y^{n-1}.$
Use this to implement a simple procedure for computing $n^{\text{th}}$ roots using fixed-point, average-damp, and the repeated procedure of Exercise 1.43. Assume that any arithmetic operations you need are available as primitives.

Solution

I admit I skipped the experimentation step. I conjecture that we need $\left\lfloor \frac{n}{2} \right\rfloor$ average-damps. I haven't tested whether fewer would suffice, but this appears to work (a rough experiment sketch follows the output below).

#lang sicp
(define tolerance 0.00001)

(define (square x) (* x x))
(define (inc x) (+ x 1))
(define (average a b) (/ (+ a b) 2.0))

;; Conjecture, to find the nth root we need (floor (/ n 2)) damping steps
(define (average-damp f)
  (lambda (x)
    (average x (f x))))
(define (compose f g) (lambda (x) (f (g x))))
(define (repeated f n) 
  (if (= n 1)
    f
   (compose f (repeated f (- n 1)))))

(define (fixed-point f first-guess)
  (define (close-enough? v1 v2)
    (< (abs (- v1 v2))
       tolerance))
  (define (try guess)
    (let ((next (f guess)))
      (if (close-enough? guess next)
          next
          (try next))))
  (try first-guess))
(define (pow x n)
  (exp (* n (log x))))
(define (nth-root n x)
  (fixed-point
   ((repeated average-damp (floor (/ n 2)))
    (lambda (y) (/ x (pow y (- n 1)))))
   1.0))

(nth-root 2 2.0)
(nth-root 3 2.0)
(nth-root 4 2.0)
(nth-root 5 2.0)
(nth-root 6 2.0)
(nth-root 7 2.0)
(nth-root 8 2.0)
Output:
1.4142135623746899
1.259923236422975
1.189207115002721
1.1486967244204176
1.1224645830339175
1.1040904628970067
1.0905020457630394

Compare to the exact values:

1.414213562373095048...
1.259921049894873164...
1.189207115002721066...
1.148698354997035006...
1.122462048309372981...
1.104089513673812337...
1.090507732665257659...
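
Going back to the skipped experiment: here is a rough sketch of how one could probe the minimum number of damps, using a bounded fixed-point search so that a non-converging case simply gives up. The step limit and the helpers fixed-point-bounded, converges?, and min-damps are all ad hoc names introduced here, and the result is only a heuristic, since "did not converge within 1000 steps" is not a proof of divergence. It reuses tolerance, average-damp, repeated, and pow from the solution above.

(define (fixed-point-bounded f first-guess max-steps)
  (define (close-enough? v1 v2)
    (< (abs (- v1 v2)) tolerance))
  (define (try guess steps)
    (cond ((= steps 0) #f)   ; give up: treat as non-convergent
          (else
           (let ((next (f guess)))
             (if (close-enough? guess next)
                 next
                 (try next (- steps 1)))))))
  (try first-guess max-steps))

(define (converges? n damps)
  ;; does damping `damps` times make the nth-root search for x = 2 settle down?
  (number? (fixed-point-bounded
            ((repeated average-damp damps)
             (lambda (y) (/ 2.0 (pow y (- n 1)))))
            1.0
            1000)))

(define (min-damps n)
  ;; smallest damp count (>= 1) that converges within the step budget
  (define (iter d)
    (if (converges? n d) d (iter (+ d 1))))
  (iter 1))

;; e.g. (min-damps 4), (min-damps 8), (min-damps 16), ...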

Exercise 1.46

Several of the numerical methods described in this chapter are instances of an extremely general computational strategy known as iterative improvement. Iterative improvement says that, to compute something, we start with an initial guess for the answer, test if the guess is good enough, and otherwise improve the guess and continue the process using the improved guess as the new guess. Write a procedure iterative-improve that takes two procedures as arguments: a method for telling whether a guess is good enough and a method for improving a guess. Iterative-improve should return as its value a procedure that takes a guess as argument and keeps improving the guess until it is good enough. Rewrite the sqrt procedure of 1.1.7 and the fixed-point procedure of 1.3.3 in terms of iterative-improve.

Solution
#lang sicp

(define (average a b) (/ (+ a b) 2.0))

(define (iterative-improve good-enough? improve)
  (lambda (guess)
    (let ((guess2 (improve guess)))
      (if (good-enough? guess guess2)
          guess2
          ((iterative-improve good-enough? improve) guess2)))))

(define tolerance 0.0001)
(define (my-close-enough? v1 v2)
  (< (abs (- v1 v2))
     tolerance))
(define (my-sqrt x)
  ((iterative-improve
    my-close-enough?
    (lambda (guess) (average guess (/ x guess))))
   1.0))

(define (fixed-point f first-guess)
  ((iterative-improve my-close-enough? f) first-guess))

(my-sqrt 2.0)

(fixed-point cos 1.0)
Output:
1.4142135623746899
0.7390547907469174

One interesting thought: iterative-improve feels very poorly written to me. It works, but a well-written, lightning-fast implementation would exploit tail recursion. Is there a tail-call optimization here? I'm wondering whether tail recursion is trivial in this case.

I don't think our evaluation model is advanced enough yet to answer that question, and I'm too lazy to check by timing this code.
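
For what it's worth, the recursive call in the version above is already syntactically in tail position (and Scheme guarantees proper tail calls), so it should run in constant stack space, though it rebuilds the returned closure on every step. Here is a sketch of a variant with an explicit internal loop that avoids the rebuilding:

(define (iterative-improve good-enough? improve)
  (define (iter guess)
    (let ((guess2 (improve guess)))
      (if (good-enough? guess guess2)
          guess2
          (iter guess2))))
  (lambda (guess) (iter guess)))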