Merged
2 changes: 1 addition & 1 deletion bitreq/src/request.rs
@@ -190,7 +190,7 @@ impl Request {
     pub fn with_json<T: serde::ser::Serialize>(mut self, body: &T) -> Result<Request, Error> {
         self.headers
             .insert("Content-Type".to_string(), "application/json; charset=UTF-8".to_string());
-        match serde_json::to_string(&body) {
+        match serde_json::to_vec(&body) {
             Ok(json) => Ok(self.with_body(json)),
             Err(err) => Err(Error::SerdeJsonError(err)),
         }
2 changes: 1 addition & 1 deletion bitreq/src/response.rs
@@ -248,7 +248,7 @@ impl Response {
     where
         T: serde::de::Deserialize<'a>,
     {
-        match serde_json::from_str(self.as_str()?) {
+        match serde_json::from_slice(self.as_bytes()) {
Member:
Do we really prefer the serde error over the utf8 error? I'm skeptical the claimed perf difference matters (notably, when deserializing a string, serde_json skips utf8 validation if you call from_str, whereas it has to do it if you call from_slice). I don't have a strong feeling on the error, but the commit should explain why we prefer the swap.

Contributor (author):
Thanks Matt, see below; I have expanded the commit message.

> I'm skeptical the claimed perf difference matters

The claim is that a reduction in cache misses leads to non-trivial performance gains.

We still do the same amount of work on the data.

             Ok(json) => Ok(json),
             Err(err) => Err(Error::SerdeJsonError(err)),
         }