I can give you a quick-and-dirty informal analysis, but you'll need to do any formal work yourself. OK, let's start with this one:
(define (accumulate proc init alist)
  (cond ((null? alist) init)
        (else (proc (car alist)
                    (accumulate proc init (cdr alist))))))
Since this is a higher-order function, the actual time and space will depend on the procedure being applied, but the (accumulate) function itself is a simple linear time - O(n) - operation: it makes exactly one pass over the items in alist. Since the function is not tail recursive (it must first accumulate the result from the rest of the list, then apply proc to that result afterwards), one pending application is left on the stack per element, so it is also linear in space.
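To make that concrete, here is a sketch of how the deferred applications pile up, using + as the accumulating procedure (the working definition is repeated so the example stands on its own):

```scheme
(define (accumulate proc init alist)
  (cond ((null? alist) init)
        (else (proc (car alist)
                    (accumulate proc init (cdr alist))))))

;; (accumulate + 0 '(1 2 3)) unfolds as:
;;   (+ 1 (accumulate + 0 '(2 3)))
;;   (+ 1 (+ 2 (accumulate + 0 '(3))))
;;   (+ 1 (+ 2 (+ 3 (accumulate + 0 '()))))
;;   (+ 1 (+ 2 (+ 3 0)))
;; One pending application per element: O(n) time and O(n) space.
(accumulate + 0 '(1 2 3))      ; => 6
(accumulate cons '() '(1 2 3)) ; => (1 2 3), i.e. a copy of the list
```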
Now for (insert):
(define (insert x alist)
  (cond ((null? alist) (cons x alist))
        ((> x (car alist))
         (cons (car alist) (insert x (cdr alist))))
        (else (cons x alist))))
Here, the function walks a sorted list: if the value of x is greater than the current element (the (car) of alist), then it recurses on the remainder of the list and conses the current element onto the result. If it finds an element greater than or equal to x, then it conses x onto what is left of alist (including that element) and returns it. If it reaches the end of the list without finding a larger value, it returns the one-element list containing x. Walking through an example (insert 4 '(1 3 3 4 7)), you would get
(car alist) == 1, 4 > 1, ->
(cons 1 (insert 4 '(3 3 4 7)))
(car (cdr alist)) == 3, 4 > 3 ->
(cons 1 (cons 3 (insert 4 '(3 4 7))))
(car (cdr (cdr alist))) == 3, 4 > 3 ->
(cons 1 (cons 3 (cons 3 (insert 4 '(4 7)))))
(car (cdr (cdr (cdr alist)))) == 4, 4 > 4 fails ->
(cons 1 (cons 3 (cons 3 (cons 4 '(4 7)))))
which evaluates to (1 3 3 4 4 7).
As you can see, the algorithm passes through each element at most once, so again the upper bound on the time is linear; the worst case (i.e., when x is greater than every element of alist) takes one pass per list element, plus one more for the empty list at the end. Since the algorithm is not tail recursive, and there is one recursive call per element, the space is also linear in the worst case.
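As a quick aside, the two functions compose naturally: folding (insert) over a list with (accumulate) gives a simple insertion sort - n inserts of O(n) each, so O(n^2) overall. A sketch (the name insertion-sort is my own, and both definitions are repeated so the example is self-contained):

```scheme
(define (accumulate proc init alist)
  (cond ((null? alist) init)
        (else (proc (car alist)
                    (accumulate proc init (cdr alist))))))

(define (insert x alist)
  (cond ((null? alist) (cons x alist))
        ((> x (car alist))
         (cons (car alist) (insert x (cdr alist))))
        (else (cons x alist))))

;; Folding insert over the input inserts each element into an
;; (initially empty) sorted list: n inserts of O(n) each => O(n^2).
(define (insertion-sort alist)
  (accumulate insert '() alist))

(insertion-sort '(4 1 3 7 3)) ; => (1 3 3 4 7)
```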
This should give you some idea of how to at least estimate the efficiency of relatively simple functions like these. Keep in mind that big-O values describe relative efficiency; they don't measure the absolute speed of a function (which would depend on the particular implementation), but rather how its running time scales with the size of its arguments.
References:
Wikipedia: Big O Notation (http://en.wikipedia.org/wiki/Big_O_notation)
NIST article: Big O Notation (http://www.nist.gov/dads/HTML/bigOnotation.html)
Complexity and Big O Notation (http://www.cs.wisc.edu/~hasti/cs367-common/notes/COMPLEXITY.html)
Big-O Notation and Algorithm Analysis (http://www.eecs.harvard.edu/~ellard/Q-97/HTML/root/node8.html)