
md"""Student: **$(student.name)** (ID# $(student.ubcid))"""

# Fill in these fields, and execute the cell either by hitting "Shift-Enter" or clicking the play button at the bottom right of this cell.
student = (name = "Harry Potter", ubcid = "123456789")

md"""
# CPSC 406 — Homework 7
"""


md"""
## Exercise 1. Projection onto a halfspace
A nonzero n-vector $a$ and a scalar $\beta$ define the halfspace

$$H^{-}_{a,β}=\{x∈\Re^n\mid ⟨a,x⟩≤β\}.$$

**(Part a)**. Derive an explicit expression for the projection of an arbitrary n-vector $b$ onto $H^-_{a,β}$.

$$\mbox{proj}_{H^-_{a,β}}(b) =
\begin{cases}
blah & \mbox{if $...$}\\
blah & \mbox{if $...$}
\end{cases}$$
"""
# Answer here


md"""
**(Part b.)** Implement your solution above by completing the following function.
"""
# Answer here
"""
Project the n-vector `b` onto the halfspace {x ∣ ⟨a,x⟩ ≤ β} defined by the nonzero vector `a` and the scalar `β`.
"""
function proj_halfspace(b, a, β)
    # complete ...
end;


md"**(Part c.)** Verify numerically with a few examples that your projection code is correct."
# code here...


md"""
## Exercise 2. Alternating projection method

The function `proj_simplex`, defined in the _Helper Functions_ section below, computes the projection of any n-vector onto the probability simplex $\{x\in\Re^n\mid \sum_{j=1}^n x_j=1,\ x≥0\}$. The cell below shows the result of the projection of a vector `b` onto the simplex.
"""
let
    b = [12.0, -23.0, 16.0]
    proj_simplex(b)
end


md"""
Now suppose that we wish to compute the projection onto the **intersection** of the probability simplex and an arbitrary halfspace. Can you do this using the projection function `proj_halfspace`, which you implemented above, and `proj_simplex`?

Yes, you can, using the **[Projection Onto Convex Sets](https://en.wikipedia.org/wiki/Projections_onto_convex_sets)** algorithm, which iteratively projects onto each convex set. The algorithm is simple, and involves the following iteration:

$$x_{k+1}=P_1(P_2(x_k)),$$

where $P_1$ and $P_2$ are projection operators onto the first and second sets. Implement this algorithm for generic projection operators $P_1$ and $P_2$. Exit the loop when $\|x_{k+1}-x_k\|≤ϵ\|x_0\|$ for some positive tolerance $\epsilon$.

Note that this method can in some situations take many iterations.
"""
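As a concrete toy sketch of this iteration (not the homework's simplex/halfspace pair), the following projects onto the intersection of two hypothetical box sets, whose projections are componentwise clamps; the sets, tolerance, and starting point are made up for illustration:

```julia
using LinearAlgebra

# Hypothetical toy sets: P1 projects onto the box [0,2]^n, P2 onto [1,3]^n.
P1(x) = clamp.(x, 0.0, 2.0)
P2(x) = clamp.(x, 1.0, 3.0)

function pocs(x0, P1, P2; maxits=100, ϵ=1e-8)
    x = copy(x0)
    for k in 1:maxits
        xnew = P1(P2(x))                     # one alternating-projection step
        done = norm(xnew - x) ≤ ϵ * norm(x0) # stopping test from the text
        x = xnew
        done && break
    end
    return x
end

pocs([5.0, -4.0], P1, P2)    # → [2.0, 1.0], a point in both boxes
```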


md"**(Part a.)** Complete the following function."
"""
Project a vector `x` onto the intersection of two convex sets. Takes as input two functions `p1` and `p2` that each take a vector and return its projection onto the corresponding set.
"""
function proj_intersection(x::Vector, p1::Function, p2::Function; maxits=100, ϵ=1e-7)
    err = []
    x0 = copy(x)
    ϵ *= norm(x0, Inf)
    for i = 1:maxits
        # complete....
        push!(err, norm(x-x0, Inf))
        err[end] ≤ ϵ && break
        x0 = copy(x)
    end
    return x, err
end;


md"""
**(Part b).** Test your implementation of `proj_intersection` on a small problem that aims to find the projection of a vector onto the intersection of the unit probability simplex and a halfspace. Verify that the answer is correct.

!!! warning
    Make sure that your test problem is **feasible** -- i.e., that the intersection isn't empty.
"""
# Answer here


md"""
## Exercise 3. Projected gradient method
"""


md"""
**(Part a).** The function below is meant to minimize a simple quadratic function $\frac12 x^T Qx$ over a convex set $C$. Complete the implementation. Use a fixed step size equal to $1/L$, where $L$ is the Lipschitz constant of the gradient.
"""
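For intuition before completing the function, here is a minimal sketch of the projected gradient iteration on a hypothetical set $C=\{x\mid x\ge 1\}$ (componentwise), whose projection is a simple clamp. For $f(x)=\tfrac12 x^TQx$ the gradient $Qx$ is Lipschitz with constant $L=\lambda_{\max}(Q)$, which gives the fixed step $1/L$:

```julia
using LinearAlgebra

# Hypothetical example set C = {x | x ≥ 1}; its projection is a clamp.
proj_example(x) = max.(x, 1.0)

function pgrad_sketch(Q, proj, x0; maxits=1000)
    L = opnorm(Q)                        # λmax(Q): Lipschitz constant of ∇f(x) = Qx
    x = proj(copy(x0))
    for k in 1:maxits
        x = proj(x - (1/L) * (Q * x))    # gradient step with size 1/L, then project
    end
    return x
end

Q = [2.0 0.0; 0.0 1.0]
pgrad_sketch(Q, proj_example, [3.0, 5.0])   # → [1.0, 1.0], the constrained minimizer
```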
"""
Projected gradient method for the quadratic optimization problem

    minimize_{x} 1/2 x'Qx  subj to  x ∈ C.

The function `proj(b)` computes the projection of the vector `b` onto the convex set `C`.
"""
function pgrad(Q, proj::Function; maxits=100, ϵ=1e-5)
    err = []
    # complete...
    return x, err
end;


md"""
**(Part b).** Use the function `pgrad` to obtain the minimum-risk portfolio from a set of assets with returns `r` and covariances `Σ`, computed here:
"""
r, Σ = meancov(df); # `meancov` is implemented in the Helper Functions section.
# Complete this function
function efficient_portfolio(r, Σ, μ)
    n = length(r)
    p1 = # ...
    p2 = # ...
    p = b -> proj_intersection(b, p1, p2, maxits=10000, ϵ=1e-8)[1]
    x, err = pgrad(Σ, p, maxits=1000)
end;

min_risk_portfolio, min_risk_itn_errors = efficient_portfolio(r, Σ, 0)


md"""
The plot below shows the error in the optimality of the iterations, as measured by

$$\|x_{k+1}-x_k\|.$$
"""

plot(min_risk_itn_errors, yscale=:log10, xlab="iterations", ylab="error", leg=false)


md"""
**(Part c).** Use the function `efficient_portfolio` to compute an optimal portfolio with expected return of 10%.
"""

xp1, xp1_itn_errors = efficient_portfolio(...)

plot(xp1_itn_errors, yscale=:log10, xlab="iterations", ylab="error", leg=false)


md"""
## Appendix: Helper functions
"""
using LinearAlgebra, Random, CSV, DataFrames, Statistics, Plots
"""
Projection onto the unit simplex.

    proj_simplex(b; τ=1.0) -> x

Out-of-place variant of `projsplx!`.
"""
function proj_simplex(b::Vector; τ=1.0)
    x = copy(b)
    projsplx!(x, τ)
    return x
end;
"""
Projection onto the unit simplex {x | sum(x) = τ, x ≥ 0}.

projsplx!(b, τ)

In-place variant of `projsplx`.
"""
function projsplx!(b::Vector{T}, τ::T) where T

    n = length(b)
    bget = false
    tmax = zero(T)

    idx = sortperm(b, rev=true)
    tsum = zero(T)

    @inbounds for i = 1:n-1
        tsum += b[idx[i]]
        tmax = (tsum - τ)/i
        if tmax ≥ b[idx[i+1]]
            bget = true
            break
        end
    end

    if !bget
        tmax = (tsum + b[idx[n]] - τ) / n
    end

    @inbounds for i = 1:n
        b[i] = max(b[i] - tmax, 0)
    end
end;
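A quick sanity check of `projsplx!`, using an arbitrarily chosen vector (this relies on the `projsplx!` definition above); after projecting, the entries should be nonnegative and sum to τ:

```julia
# Arbitrary test vector; τ = 1 gives the probability simplex.
b = [0.5, 0.8]
projsplx!(b, 1.0)
b ≈ [0.35, 0.65] && sum(b) ≈ 1.0   # → true
```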
stocks =
["AAPL", "META", "GOOG", "MSFT", "NVDA", # technology
"AMZN", "COST", "EBAY", "TGT", "WMT", # services
"BMO", "BNS", "HBCP", "RY", "TD", # finance
"BP", "CVX", "IMO", "TTE", "XOM" # energy
];
tmpdir = mktempdir();
for s in stocks
    url = "https://query1.finance.yahoo.com/v7/finance/download/$(s)?period1=1647815267&period2=1679351267&interval=1d&events=history&includeAdjustedClose=true"
    Base.download(url, joinpath(tmpdir,"$(s).csv"))
end
df = map(stocks) do stock
    f = joinpath(tmpdir,"$(stock).csv")
    dfs = DataFrame(CSV.File(f, select=[1,6])) # keep only date and closing price
    rename!(dfs, "Adj Close" => stock)
    select!(dfs, :Date, "$stock" .=> p->(p.-p[1])/p[1])
    rename!(dfs, "$(stock)_function" => "$stock")
end |> z->innerjoin(z..., on=:Date);
function meancov(df)
    r = mean.(eachcol(df[:,2:end]))
    Σ = cov(Matrix(df[:,2:end]))
    return r, Σ
end;